<li>Peter emailed to point out that many items in the <a href="https://cgspace.cgiar.org/handle/10568/2703">ILRI archive collection</a> have multiple handles:</li>
<li>Peter Ballantyne said he was having problems logging into CGSpace with “both” of his accounts (CGIAR LDAP and personal, apparently)</li>
<pre><code>2017-10-01 20:24:57,928 WARN org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=CA0AA5FEAEA8805645489404CDCE9594:ip_addr=41.204.190.40:ldap_attribute_lookup:type=failed_search javax.naming.CommunicationException: svcgroot2.cgiarad.org:3269 [Root exception is java.net.ConnectException: Connection timed out (Connection timed out)]
2017-10-01 20:22:37,982 INFO org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=CA0AA5FEAEA8805645489404CDCE9594:ip_addr=41.204.190.40:failed_login:no DN found for user pballantyne
</code></pre>
<li>For what it's worth, there are no errors on any other recent days, so it must have been some network issue on Linode or CGNET's LDAP server</li>
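<li>A quick way to verify this is to count the LDAP connection errors per daily DSpace log (log path assumed), for example:</li>
<pre><code># Count LDAP connection timeouts in each daily DSpace log (log path assumed)
$ grep -c 'javax.naming.CommunicationException' /home/cgspace.cgiar.org/log/dspace.log.2017-10-*
</code></pre>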
<li>Linode emailed to say that linode578611 (DSpace Test) needs to migrate to a new host for a security update so I initiated the migration immediately rather than waiting for the scheduled time in two weeks</li>
<li>We'll need to check for browse links and handle them properly, including swapping the <code>subject</code> parameter for <code>systemsubject</code> (which doesn't exist in Discovery yet, but we'll need to add it), as we have moved their poorly curated subjects from <code>dc.subject</code> to <code>cg.subject.system</code></li>
<li>The second link was a direct link to a bitstream, which broke because the bitstream's sequence ID was updated, so I told him he should link to the handle of the item instead</li>
<li>Help Sisay proof sixty-two IITA records on DSpace Test</li>
<li>Lots of inconsistencies and errors in subjects, <code>dc.format.extent</code>, regions, and countries</li>
<li>Merge the Discovery search changes for ISI Journal (<a href="https://github.com/ilri/DSpace/pull/341">#341</a>)</li>
<li>Working on the nginx redirects for CGIAR Library</li>
<li>We should start using 301 redirects and also allow for <code>/sitemap</code> to work on the library.cgiar.org domain so the CGIAR System Organization people can update their Google Search Console and allow Google to find their content in a structured way</li>
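<li>Once deployed, a couple of <code>curl</code> spot checks should confirm the behavior (exact URLs assumed):</li>
<pre><code># Old library.cgiar.org links should answer with 301s pointing at CGSpace,
# and the sitemap should resolve on the library.cgiar.org domain (URLs assumed)
$ curl -sI https://library.cgiar.org/ | grep -E '^(HTTP|Location)'
$ curl -sI https://library.cgiar.org/sitemap | grep '^HTTP'
</code></pre>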
<li>Remove eleven occurrences of <code>ACP</code> in IITA's <code>cg.coverage.region</code> using the Atmire batch edit module from Discovery</li>
<li>Need to investigate how we can verify the library.cgiar.org using the HTML or DNS methods</li>
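<li>For the DNS method Google looks for a <code>TXT</code> record on the domain, which we could confirm with <code>dig</code> (the verification token below is hypothetical):</li>
<pre><code># Google's DNS verification method expects a TXT record like this (token hypothetical)
$ dig +short TXT library.cgiar.org
"google-site-verification=aBcDeFg1234..."
</code></pre>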
<li>Run corrections on 143 ILRI Archive items with two <code>dc.identifier.uri</code> (Handle) values, which Peter had pointed out earlier this week</li>
<li>I used OpenRefine to isolate them and then fixed and re-imported them into CGSpace</li>
<li>I manually checked a dozen of them and it appeared that the correct handle was always the second one, so I just deleted the first one</li>
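<li>A PostgreSQL sketch that would isolate such items directly (DSpace 5 schema; the <code>dc.identifier.uri</code> field ID is commonly 25 in a stock metadata registry, but worth verifying):</li>
<pre><code># Find items with more than one Handle URI (field ID 25 assumed for dc.identifier.uri)
$ psql -d dspace -c "SELECT resource_id, COUNT(*) FROM metadatavalue WHERE metadata_field_id=25 AND resource_type_id=2 AND text_value LIKE '%hdl.handle.net%' GROUP BY resource_id HAVING COUNT(*) > 1;"
</code></pre>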
<li>I saw a nice tweak to thumbnail presentation on the Cardiff Metropolitan University DSpace: <a href="https://repository.cardiffmet.ac.uk/handle/10369/8780">https://repository.cardiffmet.ac.uk/handle/10369/8780</a></li>
<li>It adds a subtle border and box shadow to the thumbnail image</li>
<li>Deploy logic to allow verification of the library.cgiar.org domain in the Google Search Console (<a href="https://github.com/ilri/DSpace/pull/343">#343</a>)</li>
<li>After verifying both the HTTP and HTTPS domains and submitting a sitemap, it will be interesting to see how the stats in the console change, as well as the search results (currently 28,500 results)</li>
<li>I tried to submit a “Change of Address” request in the Google Search Console but I need to be an owner on CGSpace's console (currently I'm just a user) in order to do that</li>
<li>Peter added me as an owner on the CGSpace property on Google Search Console and I tried to submit a “Change of Address” request for the CGIAR Library but got an error:</li>
<li>We are sending top-level CGIAR Library traffic to their specific community hierarchy in CGSpace so this type of change of address won't work—we'll just need to wait for Google to slowly index everything and take note of the HTTP 301 redirects</li>
<li>Also, the Google Search Console doesn't work very well when Google Analytics is blocked, so I had to turn off my ad blocker to get the “Change of Address” tool to work!</li>
<li>Finally finish (I think) working on the myriad nginx redirects for all the CGIAR Library browse stuff—it ended up getting pretty complicated!</li>
<li>I still need to commit the DSpace changes (add browse index, XMLUI strings, Discovery index, etc), but I should be able to deploy that on CGSpace soon</li>
<li>Run system updates on DSpace Test and reboot server</li>
<li>Merge changes adding a search/browse index for CGIAR System subject to <code>5_x-prod</code> (<ahref="https://github.com/ilri/DSpace/pull/344">#344</a>)</li>
<li>I checked the top browse links in Google's search results for <code>site:library.cgiar.org inurl:browse</code> and they are all redirected appropriately by the nginx rewrites I worked on last week</li>
<li>Run system updates on DSpace Test and reboot server</li>
<li>Re-deploy CGSpace from latest <code>5_x-prod</code> (adds ISI Journal to search filters and adds Discovery index for CGIAR Library <code>systemsubject</code>)</li>
<li>Deploy nginx redirect fixes to catch CGIAR Library browse links (redirect to their community and translate subject→systemsubject)</li>
<li>Run migration of CGSpace server (linode18) for Linode security alert, which took 42 minutes of downtime</li>
<li>I still have no idea what was causing the load to go up today</li>
<li>I finally investigated Magdalena's issue with the item download stats and now I can't reproduce it: I get the same number of downloads reported in the stats widget on the item page, the “Most Popular Items” page, and in Usage Stats</li>
<li>I think it might have been an issue with the statistics not being fresh</li>
<li>I added the admin group for the systems organization to the admin role of the top-level community of CGSpace because I guess Sisay had forgotten</li>
<li>Magdalena asked if there was a way to reuse data in item submissions where items have a lot of similar data</li>
<li>I told her about the possibility to use per-collection item templates, and asked if her items in question were all from a single collection</li>
<li>We've never used it but it could be worth looking at</li>
<li>CORE seems to be some bot that is “Aggregating the world’s open access research papers”</li>
<li>The contact address listed in their bot's user agent is incorrect; the correct page is simply: <a href="https://core.ac.uk/contact">https://core.ac.uk/contact</a></li>
<li>I will check the logs in a few days to see if they are harvesting us regularly, then add their bot's user agent to the Tomcat Crawler Session Valve</li>
<li>After browsing the CORE site it seems that the CGIAR Library is somehow a member of CORE, so they have probably only been harvesting CGSpace since we did the migration, as library.cgiar.org redirects to us now</li>
<li>For now I will just contact them to have them update their contact info in the bot's user agent, but eventually I think I'll tell them to swap out the CGIAR Library entry for CGSpace</li>
<li>Surprise surprise, the CORE bot is likely responsible for the recent load issues, making hundreds of thousands of requests yesterday and today:</li>
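<li>A rough tally of requests per client IP in the nginx access logs (log path assumed) makes this easy to see:</li>
<pre><code># Top client IPs in yesterday's access log (log path assumed)
$ awk '{print $1}' /var/log/nginx/access.log.1 | sort | uniq -c | sort -rn | head
</code></pre>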
<li>At least we know the top two are CORE, but who are the others?</li>
<li>190.19.92.5 is apparently in Argentina, and 104.196.152.243 is from Google Compute Engine</li>
<li>Actually, these two scrapers might be more responsible for the heavy load than the CORE bot, because they don't reuse their session variable, creating thousands of new sessions!</li>
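<li>Counting the unique session IDs an IP generates in the DSpace log (log path assumed) shows the scale of the problem:</li>
<pre><code># Count the unique Tomcat sessions created by one scraper IP (log path assumed)
$ grep 104.196.152.243 /home/cgspace.cgiar.org/log/dspace.log.2017-10-* | grep -o 'session_id=[A-Z0-9]*' | sort -u | wc -l
</code></pre>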
<li>From looking at the requests, it appears these are from CIAT and CCAFS</li>
<li>I wonder if I could somehow instruct them to use a user agent so that we could apply a crawler session manager valve to them</li>
<li>Actually, according to the Tomcat docs, we could use an IP with <code>crawlerIps</code>: <a href="https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Crawler_Session_Manager_Valve">https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Crawler_Session_Manager_Valve</a></li>
<li>Ah, wait, it looks like <code>crawlerIps</code> only came in 2017-06, so probably isn't in Ubuntu 16.04's 7.0.68 build!</li>
<li>That would explain the errors I was getting when trying to set it:</li>
<pre><code>WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Valve} Setting property 'crawlerIps' to '190\.19\.92\.5|104\.196\.152\.243' did not find a matching property.
</code></pre>
<li>I've emailed the CORE people to ask if they can update the repository information from CGIAR Library to CGSpace</li>
<li>Also, I asked if they could perhaps use the <code>sitemap.xml</code>, OAI-PMH, or REST APIs to index us more efficiently, because they mostly seem to be crawling the nearly endless Discovery facets</li>
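<li>All three entry points are trivial to fetch on a stock DSpace 5, for example:</li>
<pre><code># The more efficient entry points we'd rather they used (stock DSpace 5 endpoints)
$ curl -s https://cgspace.cgiar.org/sitemap
$ curl -s 'https://cgspace.cgiar.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc'
$ curl -s 'https://cgspace.cgiar.org/rest/items?limit=100&offset=0'
</code></pre>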
<li>I added <a href="https://goaccess.io/">GoAccess</a> to the list of packages to install in the DSpace role of the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a></li>
<li>It makes it very easy to analyze nginx logs from the command line to see where traffic is coming from, for example (log path assumed):</li>
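<pre><code># Interactive summary of the nginx access log (log path assumed)
$ goaccess /var/log/nginx/access.log --log-format=COMBINED
</code></pre>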
<li>According to Uptime Robot, CGSpace went down and up a few times</li>
<li>I had a look at GoAccess and I saw that CORE was actively indexing</li>
<li>Also, PostgreSQL connections were at 91 (with the max being 60 per web app, hmmm)</li>
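<li>The connection count is easy to check by counting rows in <code>pg_stat_activity</code>, for example:</li>
<pre><code># Current number of PostgreSQL connections
$ psql -c 'SELECT COUNT(*) FROM pg_stat_activity;'
</code></pre>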
<li>I'm really starting to get annoyed with these guys, and thinking about blocking their IP address for a few days to see if CGSpace becomes more stable</li>
<li>Actually, come to think of it, they aren't even obeying <code>robots.txt</code>, because we actually disallow <code>/discover</code> and <code>/search-filter</code> URLs but they are hitting those massively:</li>
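<li>The disallow rules are plainly there for anyone who requests them:</li>
<pre><code># The relevant rules from our robots.txt
$ curl -s https://cgspace.cgiar.org/robots.txt | grep -E 'discover|search-filter'
Disallow: /discover
Disallow: /search-filter
</code></pre>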