<li><p>Now the item is (hopefully) really gone and I can continue to troubleshoot the issue with REST API’s <code>/items/find-by-metadata-value</code> endpoint</p>
<li><p>Some are in the <code>workspaceitem</code> table (pre-submission), others are in the <code>workflowitem</code> table (submitted), and others are actually approved, but withdrawn…</p>
<li>This is actually a worthless exercise because the real issue is that the <code>/items/find-by-metadata-value</code> endpoint is simply flawed by design and shouldn’t be fatally erroring when the search returns items the user doesn’t have permission to access</li>
<li>It would take way too much time to try to fix the fucked up items that are in limbo by deleting them in SQL, and it wouldn’t actually fix the problem anyway, because some items are <em>submitted</em> but <em>withdrawn</em>, so they actually have handles and everything (a quick way to check an item’s state is sketched below)</li>
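<li><p>For reference, a quick way to check which state a given item is in against the DSpace 5 schema (a sketch; the item ID and database name are hypothetical):</p>
<pre><code># an archived item will have in_archive=t (or withdrawn=t if it was withdrawn)
$ psql -d dspace -c 'SELECT item_id, in_archive, withdrawn FROM item WHERE item_id=74648;'
# pre-submission items show up in workspaceitem, submitted-but-unapproved ones in workflowitem
$ psql -d dspace -c 'SELECT workspace_item_id FROM workspaceitem WHERE item_id=74648;'
$ psql -d dspace -c 'SELECT workflow_id FROM workflowitem WHERE item_id=74648;'
</code></pre></li>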
<li>I think the solution is to recommend people don’t use the <code>/items/find-by-metadata-value</code> endpoint</li>
<li>They asked in 2018-09 as well and I told them it wasn’t possible</li>
<li>To make sure, I looked at <a href="https://wiki.duraspace.org/display/DSPACE/Enable+Media+RSS+Feeds">the documentation for RSS media feeds</a> and tried it, but couldn’t get it to work</li>
<li>It seems to be geared towards iTunes and Podcasts… I dunno</li>
<li>Run all system updates on DSpace Test (linode19) and reboot it</li>
<li>Merge changes into the <code>5_x-prod</code> branch of CGSpace:
<ul>
<li>Updates to remove deprecated social media websites (Google+ and Delicious), update Twitter share intent, and add item title to Twitter and email links (<a href="https://github.com/ilri/DSpace/pull/421">#421</a>)</li>
<li>Add new CCAFS Phase II project tags (<a href="https://github.com/ilri/DSpace/pull/420">#420</a>)</li>
<li>Add item ID to REST API error logging (<a href="https://github.com/ilri/DSpace/pull/422">#422</a>)</li>
</ul></li>
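<li><p>The merges are plain Git operations on the <code>5_x-prod</code> branch (a rough sketch; the PR branch names here are hypothetical):</p>
<pre><code>$ git checkout 5_x-prod
# merge each pull request branch after review
$ git merge --no-ff social-media-updates
$ git merge --no-ff ccafs-phase-ii-tags
$ git merge --no-ff rest-api-error-logging
$ git push origin 5_x-prod
</code></pre></li>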
<li>Re-deploy CGSpace from <code>5_x-prod</code> branch</li>
<li>Run all system updates on CGSpace (linode18) and reboot it</li>
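<li><p>The updates are routine (a sketch, assuming the Linodes run Ubuntu/Debian):</p>
<pre><code>$ sudo apt update
$ sudo apt full-upgrade
$ sudo reboot
</code></pre></li>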
<li><p>Strangely enough, I <em>do</em> see the statistics-2018, statistics-2017, etc. cores in the Admin UI…</p></li>
<li><p>I restarted Tomcat a few times (and even deleted all the Solr write locks) and at least five times there were issues loading one statistics core, causing the Atmire stats to be incomplete</p>
<ul>
<li>Also, I tried to increase the <code>writeLockTimeout</code> in <code>solrconfig.xml</code> from the default of 1000ms to 10000ms</li>
<li>Eventually the Atmire stats started working, despite errors about “Error opening new searcher” in the Solr Admin UI</li>
<li>I wrote to the dspace-tech mailing list again on the thread from March 2019</li>
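<li><p>Clearing the write locks and checking the timeout looks roughly like this (a sketch; the Solr path and Tomcat service name are assumptions):</p>
<pre><code># stop Tomcat before touching the index files
$ sudo systemctl stop tomcat7
# remove any stale write locks under the Solr cores
$ sudo find /dspace/solr -name write.lock -delete
# writeLockTimeout lives in each core's solrconfig.xml
$ grep writeLockTimeout /dspace/solr/statistics/conf/solrconfig.xml
$ sudo systemctl start tomcat7
</code></pre></li>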
<p><img src="/cgspace-notes/2019/05/2019-05-06-cpu-day.png" alt="linode18 CPU day" /></p>
<ul>
<li><p>The number of unique sessions today is <em>ridiculously</em> high compared to the last few days considering it’s only 12:30PM right now:</p>
<li><p>The number of unique IP addresses from 2 to 6 AM this morning is already several times higher than the average for that time of the morning this past week:</p>
<li><p>I’m not exactly sure what happened this morning, but it looks like some legitimate user traffic—perhaps someone launched a new publication and it got a bunch of hits?</p></li>
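<li><p>A rough sketch of the kind of log queries behind these counts (log paths are assumptions):</p>
<pre><code># unique client IPs between 02:00 and 06:00 on 2019-05-06
$ zcat --force /var/log/nginx/access.log /var/log/nginx/access.log.1 | grep -E '06/May/2019:(02|03|04|05)' | awk '{print $1}' | sort -u | wc -l
# unique DSpace sessions for the day, estimated from the session IDs in dspace.log
$ grep -o -E 'session_id=[A-Z0-9]{32}' /dspace/log/dspace.log.2019-05-06 | sort -u | wc -l
</code></pre></li>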
<li><p>Looking again, I see 84,000 requests to <code>/handle</code> this morning (not including logs for library.cgiar.org because those get HTTP 301 redirect to CGSpace and appear here in <code>access.log</code>):</p>
<li><p>But it would be difficult to find a pattern for those requests because they cover 78,000 <em>unique</em> Handles (i.e. direct browsing of items, collections, or communities) and only 2,492 discover/browse (total, not unique):</p>
<li><p>According to <a href="https://viewdns.info/reverseip/?host=63.32.242.35&amp;t=1">viewdns.info</a> that server belongs to Macaroni Brothers’</p>
<ul>
<li>The user agent of their non-REST API requests from the same IP is Drupal</li>
<li>This is one very good reason to limit REST API requests, and perhaps to enable caching via nginx</li>
<li><p>A user said that CGSpace emails have stopped sending again</p>
<ul>
<li><p>Indeed, the <code>dspace test-email</code> script is showing an authentication failure:</p>
<pre><code>$ dspace test-email
About to send test email:
- To: wooooo@cgiar.org
- Subject: DSpace test email
- Server: smtp.office365.com
Error sending email:
- Error: javax.mail.AuthenticationFailedException
Please see the DSpace documentation for assistance.
</code></pre></li>
</ul></li>
<li><p>I checked the settings and apparently I had updated it incorrectly last week after ICT reset the password</p></li>
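<li><p>The SMTP credentials live in <code>dspace.cfg</code>; after correcting them it's just a matter of double-checking the values and re-running the test (the DSpace path is an assumption):</p>
<pre><code># all of the relevant keys start with mail.server (server, port, username, password)
$ grep -E '^mail\.server' /dspace/config/dspace.cfg
# restart Tomcat so the new credentials are picked up, then:
$ /dspace/bin/dspace test-email
</code></pre></li>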
<li><p>Help Moayad with certbot-auto for Let’s Encrypt scripts on the new AReS server (linode20)</p></li>
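<li><p>The certbot-auto part is straightforward (a sketch; the hostname is hypothetical, and the standalone authenticator needs port 80 free):</p>
<pre><code>$ wget https://dl.eff.org/certbot-auto
$ chmod +x certbot-auto
$ sudo ./certbot-auto certonly --standalone -d ares.example.org
</code></pre></li>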
<li><p>Normalize all <code>text_lang</code> values for metadata on CGSpace and DSpace Test (as I had tested last month):</p>
<pre><code>UPDATE metadatavalue SET text_lang='en_US' WHERE resource_type_id=2 AND metadata_field_id != 28 AND text_lang IN ('ethnob', 'en', '*', 'E.', '');
UPDATE metadatavalue SET text_lang='en_US' WHERE resource_type_id=2 AND metadata_field_id != 28 AND text_lang IS NULL;
UPDATE metadatavalue SET text_lang='es_ES' WHERE resource_type_id=2 AND metadata_field_id != 28 AND text_lang IN ('es', 'spa');
</code></pre></li>
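<li><p>To verify, it helps to look at the distribution of <code>text_lang</code> values before and after (the database name is an assumption):</p>
<pre><code>$ psql -d dspace -c "SELECT text_lang, COUNT(*) FROM metadatavalue WHERE resource_type_id=2 GROUP BY text_lang ORDER BY COUNT(*) DESC;"
</code></pre></li>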
<li><p>Send Francesca Giampieri from Bioversity a CSV export of all their items issued in 2018</p>
<ul>
<li>They will be doing a migration of 1500 items from their TYPO3 database into CGSpace soon and want an example CSV with all required metadata columns</li>
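<li><p>A CSV like that can be generated with the <code>metadata-export</code> CLI and then filtered down to the items issued in 2018 (the handle and paths here are hypothetical):</p>
<pre><code># export all metadata for the Bioversity community/collection by its handle
$ /dspace/bin/dspace metadata-export -i 10568/12345 -f /tmp/bioversity.csv
</code></pre></li>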
<li>I finally had time to analyze the 7,000 IPs from the major traffic spike on 2019-05-06 after several runs of my <code>resolve-addresses.py</code> script (ipapi.co has a limit of 1,000 requests per day)</li>
<li>Resolving the unique IP addresses to organization and AS names reveals some pretty big abusers:
<ul>
<li>1213 from Region40 LLC (AS200557)</li>
<li>697 from Trusov Ilya Igorevych (AS50896)</li>
<li>687 from UGB Hosting OU (AS206485)</li>
<li>620 from UAB Rakrejus (AS62282)</li>
<li>491 from Dedipath (AS35913)</li>
<li>476 from Global Layer B.V. (AS49453)</li>
<li>333 from QuadraNet Enterprises LLC (AS8100)</li>
<li>278 from GigeNET (AS32181)</li>
<li>261 from Psychz Networks (AS40676)</li>
<li>196 from Cogent Communications (AS174)</li>
<li>125 from Blockchain Network Solutions Ltd (AS43444)</li>
<li>118 from Silverstar Invest Limited (AS35624)</li>
</ul></li>
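<li><p><code>resolve-addresses.py</code> relies on ipapi.co (hence the 1,000 requests per day limit); the per-address lookup is essentially this (the IP is just a placeholder, and jq is only used for readability):</p>
<pre><code>$ curl -s 'https://ipapi.co/203.0.113.4/json/' | jq -r '.asn, .org'
</code></pre></li>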
<li><p>All of the IPs from these networks are using generic user agents like this one, but there are MANY more, and they change them frequently:</p>
<pre><code>"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2703.0 Safari/537.36"
</code></pre></li>
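<li><p>One way to see how often a given IP rotates its user agent is to count distinct agents per address in the nginx logs (log path is an assumption):</p>
<pre><code># client IPs with the most distinct user agents
$ awk -F'"' '{split($1, a, " "); print a[1] "\t" $6}' /var/log/nginx/access.log | sort -u | cut -f1 | uniq -c | sort -rn | head
</code></pre></li>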
<li><p>I found a <a href="https://www.qurium.org/alerts/azerbaijan/azerbaijan-and-the-region40-ddos-service/">blog post from 2018 detailing an attack from a DDoS service</a> that matches our pattern exactly</p></li>
<li><p>They specifically mention:</p></li>
</ul>
<pre>The attack that targeted the “Search” functionality of the website, aimed to bypass our mitigation by performing slow but simultaneous searches from 5500 IP addresses.</pre>
<li>So this was definitely an attack of some sort… only God knows why</li>
<li>I noticed a few new bots that don’t use the word “bot” in their user agent and therefore don’t match Tomcat’s Crawler Session Manager Valve:
<li>Tezira says she’s having issues with email reports for approved submissions, but I received an email about collection subscriptions this morning, and I tested with <code>dspace test-email</code> and it’s also working…</li>
<li>Send a list of DSpace build tips to Panagis from AgroKnow</li>
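<li><p>The tips boil down to the usual DSpace 5 build and update cycle (a sketch; the Mirage 2 flag only applies if that theme is in use):</p>
<pre><code># build the installer package
$ mvn -U clean package -Dmirage2.on=true
# apply the new build on top of the existing installation, then restart Tomcat
$ cd dspace/target/dspace-installer
$ ant update
</code></pre></li>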