<li>Then on DSpace Test I created a <code>statistics-2019</code> core with the same instance dir as the main <code>statistics</code> core (as <a href="https://wiki.lyrasis.org/display/DSDOC6x/Testing+Solr+Shards">illustrated in the DSpace docs</a>)</li>
<li>The key thing above is that you create the core in the Solr admin UI, but the data directory must already exist, so you have to create it in the file system first (see the sketch after this list)</li>
<li>I restarted the server after the import was done to see if the cores would come back up OK
<ul>
<li>I remember last time I tried this the manually created statistics cores didn’t come back up after I rebooted, but this time they did</li>
</ul>
</li>
</ul>
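<ul>
<li>A rough sketch of the idea (the paths, port, and file ownership here are examples, not the exact values used): create the data directory on disk first, then create the core against the existing instance dir, either in the admin UI or via the CoreAdmin API:</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console"># mkdir -p /dspace/solr/statistics-2019/data
# chown -R tomcat:tomcat /dspace/solr/statistics-2019
# curl 'http://localhost:8081/solr/admin/cores?action=CREATE&amp;name=statistics-2019&amp;instanceDir=statistics&amp;dataDir=/dspace/solr/statistics-2019/data'
</code></pre></div>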
<h2id="2021-11-03">2021-11-03</h2>
<ul>
<li>While inspecting the stats for the new statistics-2019 shard on DSpace Test I noticed that I can’t find any stats via the DSpace Statistics API for an item that <em>should</em> have some
<ul>
<li>I checked CGSpace’s Statistics API and I can’t find them there either, but I see them in Solr when I query in the admin UI</li>
<li>I need to debug that, but it doesn’t seem to be related to the sharding… (a query sketch follows this list)</li>
</ul>
</li>
</ul>
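<ul>
<li>Roughly what I check in the Solr admin UI, expressed as a curl query (a sketch: the port, core name, and item UUID are placeholders, and <code>type:2</code> restricts the results to hits on items):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">$ curl -s 'http://localhost:8081/solr/statistics/select?q=type:2+AND+id:&lt;item-uuid&gt;&amp;rows=0&amp;wt=json'
</code></pre></div>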
<ul>
<li>I sent a message to ILRI ICT to ask them to check the account/password</li>
<li>I want to do one last test of the Elasticsearch updates on OpenRXV, so I took a snapshot of the latest Elasticsearch volume used on the production AReS instance (a restore sketch follows the backup command):</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><codeclass="language-console"data-lang="console"># tar czf openrxv_esData_7.tar.xz /var/lib/docker/volumes/openrxv_esData_7
<ul>
<li>I migrated the 2013, 2012, and 2011 statistics to yearly shards on DSpace Test’s Solr to continue my testing of the memory / latency impact</li>
<li>I found out why the CI jobs for the DSpace Statistics API had been failing the past few weeks
<ul>
<li>When I reverted to using the original falcon-swagger-ui project after they apparently merged my Falcon 3 changes, it seems that they actually only merged the Swagger UI changes, not the Falcon 3 fix!</li>
<li>I switched back to using my own fork and now it’s working (see the sketch below for how the dependency is specified)</li>
<li>Unfortunately now I’m getting an error installing my dependencies with Poetry:</li>
</ul>
</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">  at /usr/lib/python3.9/site-packages/poetry/installation/chooser.py:72 in choose_for
68│
69│ links.append(link)
70│
71│ if not links:
→ 72│ raise RuntimeError(
73│ "Unable to find installation candidates for {}".format(package)
74│ )
75│
76│ # Get the best link
</code></pre></div><ul>
<li>So that’s super annoying… I’m going to try using Pipenv again…</li>
</ul>
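<ul>
<li>For reference, this is roughly how the project gets pointed back at my fork instead of the upstream release (a sketch assuming Poetry; the repository URL is illustrative, and the same can be done with a git URL in Pipenv):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">$ poetry remove falcon-swagger-ui
$ poetry add git+https://github.com/alanorth/falcon-swagger-ui.git
</code></pre></div>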
<h2id="2021-11-10">2021-11-10</h2>
<ul>
<li>93.158.91.62 is scraping us again
<ul>
<li>That’s an IP in Sweden that is clearly a bot, but it is pretending to use a normal user agent</li>
<li>I added them to the “bot” list in nginx so the requests will share a common DSpace session with other bots and not create Solr hits, but they are still causing high outbound traffic</li>
<li>I modified the nginx configuration to send them an HTTP 403 and tell them to use a bot user agent (see the sketch after this list)</li>
</ul>
</li>
</ul>
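<ul>
<li>Roughly, the idea in the nginx configuration is something like the following sketch (the include file name and variable are hypothetical):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">$ sudo tee /etc/nginx/conf.d/bot-networks.conf &lt;&lt;'EOF'
# map known bot client IPs to a variable; the server block can then do:
#   if ($bot_network) { return 403 'Please use a bot user agent'; }
geo $bot_network {
    default      0;
    93.158.91.62 1;
}
EOF
$ sudo nginx -t &amp;&amp; sudo systemctl reload nginx
</code></pre></div>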
<h2id="2021-11-14">2021-11-14</h2>
<ul>
<li>I decided to update AReS to the latest OpenRXV version with Elasticsearch 7.13
<ul>
<li>First I took backups of the Elasticsearch volume and OpenRXV backend data:</li>
</ul>
</li>
</ul>
<divclass="highlight"><pretabindex="0"style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><codeclass="language-console"data-lang="console">$ docker-compose down
$ sudo tar czf openrxv_esData_7-2021-11-14.tar.xz /var/lib/docker/volumes/openrxv_esData_7
$ cp -a backend/data backend/data.2021-11-14
</code></pre></div><ul>
<li>Then I checked out the latest git commit, updated all the images, and rebuilt the project, roughly as sketched below:</li>
</ul>
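<ul>
<li>A sketch of those steps, assuming the stock OpenRXV docker-compose setup (the exact invocation may differ):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">$ git pull
$ docker-compose pull
$ docker-compose build
$ docker-compose up -d
</code></pre></div>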
<ul>
<li>I also purged bot hits from the Solr statistics:</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">Total number of bot hits purged: 10893
</code></pre></div><ul>
<li>I did a bit more work documenting and tweaking the PostgreSQL configuration for CGSpace and DSpace Test in the Ansible infrastructure playbooks
<ul>
<li>I finally deployed the changes on both servers (a quick check of the running values is sketched after this list)</li>
</ul>
</li>
</ul>
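<ul>
<li>A quick way to confirm what PostgreSQL is actually running with after the deployment (a sketch; the parameters shown are just common tuning settings, not necessarily the ones that were changed):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">$ sudo -u postgres psql -c 'SHOW shared_buffers;'
$ sudo -u postgres psql -c 'SHOW effective_cache_size;'
$ sudo -u postgres psql -c 'SHOW work_mem;'
</code></pre></div>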
<h2id="2021-11-22">2021-11-22</h2>
<ul>
<li>Udana asked me about validating on OpenArchives again
<ul>
<li>According to my notes we actually completed this in 2021-08, but for some reason we are no longer on the list and I can’t validate again</li>
<li>There seems to be a problem with their website, because every link I try to validate says it received an HTTP 500 response from CGSpace (a direct check of the OAI endpoint is sketched after this list)</li>
<li>I sent an email to the OpenArchives.org contact to ask for help with the OAI validator
<ul>
<li>Someone responded to say that there have been a number of complaints about this on the oai-pmh mailing list recently…</li>
</ul>
</li>
<li>I sent an email to Pythagoras from GARDIAN to ask if they can use a more specific user agent than “Microsoft Internet Explorer” for their scraper
<ul>
<li>He said he will change the user agent</li>
</ul>
</li>
</ul>
</li>
</ul>
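<ul>
<li>A simple way to check the OAI endpoint directly from the command line (assuming the standard DSpace OAI path):</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-console" data-lang="console">$ curl -s -o /dev/null -w '%{http_code}\n' 'https://cgspace.cgiar.org/oai/request?verb=Identify'
</code></pre></div>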
<h2id="2021-11-24">2021-11-24</h2>
<ul>
<li>I had an idea to check our Solr statistics for hits from all the IPs that I have listed in nginx as being bots
<ul>
<li>Other than a few that I ruled out that <em>may</em> be humans, these are all making requests within one month or with no user agent, which is highly suspicious:</li>
<li>Peter sent me corrections for the authors that I had sent him back in 2021-09
<ul>
<li>I did a quick sanity check on them with OpenRefine, filtering out all the metadata with no replacements, then ran them through my csv-metadata-quality script</li>
<li>Then I imported them into my local instance as a test:</li>