<h2 id="2021-07-01">2021-07-01</h2>
<ul>
<li>Export another list of ALL subjects on CGSpace, including AGROVOC and non-AGROVOC, for Enrico:</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-01-all-subjects.csv WITH CSV HEADER;
COPY 20994
</code></pre>
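<ul>
<li>For reference, the numeric <code>metadata_field_id</code> values in that query can be mapped back to schema and field names via DSpace’s metadata registry tables; a quick check (a sketch, assuming the standard DSpace 6 registry table names) would be something like:</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= > SELECT f.metadata_field_id, s.short_id, f.element, f.qualifier FROM metadatafieldregistry f JOIN metadataschemaregistry s ON f.metadata_schema_id = s.metadata_schema_id WHERE f.metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) ORDER BY f.metadata_field_id;
</code></pre>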
<h2 id="2021-07-04">2021-07-04</h2>
<ul>
<li>IWMI notified me that AReS was down with an HTTP 502 error
<ul>
<li>Looking at UptimeRobot I see it has been down for 33 hours, but I never got a notification</li>
<li>I don’t see anything in the Elasticsearch container logs, or the systemd journal on the host, but I notice that the <code>angular_nginx</code> container isn’t running</li>
<li>I simply started it and AReS was running again</li>
</ul>
</li>
<li>Update all Docker containers on the AReS server (linode20) and rebuild OpenRXV:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ cd OpenRXV
$ docker-compose -f docker/docker-compose.yml down
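# (the rest of the rebuild isn't recorded in these notes; a typical
# docker-compose update would continue roughly like this)
$ docker images | grep -v ^REPO | sed 's/ \+/:/g' | cut -d: -f1,2 | xargs -L1 docker pull
$ docker-compose -f docker/docker-compose.yml build
$ docker-compose -f docker/docker-compose.yml up -d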
</code></pre><ul>
<li><a href="https://github.com/ilri/OpenRXV/issues/106">Hide the “DSpace add missing items”</a></li>
<li>Rebuild DSpace Test (linode26) from a fresh Ubuntu 20.04 image on Linode</li>
<li>The start plugins on AReS had seventy-five errors from the <code>dspace_add_missing_items</code> plugin for some reason so I had to start a fresh indexing</li>
<li>I noticed that the WorldFish data has dozens of incorrect countries so I should talk to Salem about that because they manage it
<ul>
<li>Also I noticed that we weren’t using the Country formatter in OpenRXV for the WorldFish country field, so some values don’t get mapped properly</li>
<li>I added some value mappings to fix some issues with WorldFish data and added a few more fields to the repository harvesting config and started a fresh re-indexing</li>
</ul>
</li>
<li>I guess this is related to the SMTP issues last week
<ul>
<li>I had fixed the config, but didn’t restart Tomcat so DSpace didn’t load the new variables (see the note after this list)</li>
</ul>
</li>
<li>I ran all system updates on CGSpace (linode18) and DSpace Test (linode26) and rebooted the servers</li>
</ul>
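<ul>
<li>On the SMTP point above: mail settings in <code>dspace.cfg</code> are only picked up when the webapp is restarted, and a quick way to verify them afterwards from the command line (the standard DSpace CLI launcher, assuming it is on the PATH) is:</li>
</ul>
<pre><code class="language-console" data-lang="console"># attempts to send a test message using the configured SMTP settings
$ dspace test-email
</code></pre>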
<h2id="2021-07-05">2021-07-05</h2>
<h2id="2021-06-03">2021-06-03</h2>
<ul>
<li>The AReS harvesting last night succeeded and I started the plugins</li>
<li>Margarita from CCAFS asked if we can create a new field for AICCRA publications
<ul>
<li>I asked her to clarify what they want</li>
<li>AICCRA is an initiative so it might be better to create a new field for that, for example <code>cg.contributor.initiative</code></li>
</ul>
</li>
<li>Meeting with AMCOW and IWMI to discuss AMCOW getting IWMI’s content into the new AMCOW Knowledge Hub
<ul>
<li>At first we spent some time talking about DSpace communities/collections and the REST API, but then they said they actually prefer to send queries to sites on the fly and cache them in Redis for some time</li>
<li>That’s when I thought they could perhaps use the OpenSearch, but I can’t remember if it’s possible to limit by community, or only collection…</li>
<li>Looking now, I see there is a “scope” parameter that can be used for community or collection (a hedged example URL follows this list)</li>
<li>That will sort by date issued (see: <code>webui.itemlist.sort-option.2</code> in dspace.cfg), give 100 results per page, and start on item 1</li>
<li>Otherwise, another alternative would be to use the IWMI CSV that we are already exporting every week</li>
</ul>
</li>
<li>Fill out the <em>CGIAR-AGROVOC Task Group: Survey on the current CGIAR use of AGROVOC</em> survey on behalf of CGSpace</li>
</ul>
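<ul>
<li>The example URL itself didn’t survive in these notes; based on the description above, a DSpace OpenSearch query scoped to a community or collection would look roughly like this (the handle here is only a placeholder):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'https://cgspace.cgiar.org/open-search/discover?query=*&scope=10568/12345&sort_by=2&rpp=100&start=1&format=atom'
</code></pre>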
<h2id="2021-06-06">2021-06-06</h2>
<ul>
<li>Atmire merged my spider user agent changes from last month so I will update the <code>example</code> list we use in DSpace, remove the new ones from my <code>ilri</code> override file, and purge their hits from the Solr statistics:</li>
</ul>
<pre><code class="language-console" data-lang="console">Purging 601 hits from Microsoft Office Existence Discovery in statistics
Purging 51 hits from Site24x7 in statistics
Purging 62 hits from Trello in statistics
Purging 13574 hits from WhatsApp in statistics
Purging 144 hits from FlipboardProxy in statistics
Purging 37 hits from LinkWalker in statistics
Purging 1 hits from [Ll]ink.?[Cc]heck.? in statistics
Purging 427 hits from WordPress in statistics
</code></pre><ul>
<li>The Elasticsearch indexes are messed up so I dumped and re-created them correctly (a sketch of such commands follows)</li>
</ul>
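<ul>
<li>The exact commands aren’t preserved here, but a minimal sketch of dumping and re-creating one of the AReS indexes with <code>elasticdump</code> and the Elasticsearch HTTP API (the index name is taken from elsewhere in these notes) would be something like:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ elasticdump --input=http://localhost:9200/openrxv-items-final --output=/tmp/openrxv-items-final-mapping.json --type=mapping
$ elasticdump --input=http://localhost:9200/openrxv-items-final --output=/tmp/openrxv-items-final-data.json --type=data
$ curl -XDELETE 'http://localhost:9200/openrxv-items-final'
$ elasticdump --input=/tmp/openrxv-items-final-mapping.json --output=http://localhost:9200/openrxv-items-final --type=mapping
$ elasticdump --input=/tmp/openrxv-items-final-data.json --output=http://localhost:9200/openrxv-items-final --type=data
</code></pre>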
<h2id="2021-06-14">2021-06-14</h2>
<ul>
<li>Dump and re-create indexes on AReS (as above) so I can do a harvest</li>
</ul>
<h2id="2021-06-16">2021-06-16</h2>
<ul>
<li>Looking at the Solr statistics on CGSpace for last month I see many requests from hosts using seemingly normal Windows browser user agents, but using the MSN bot’s DNS
<ul>
<li>For example, user agent <code>Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0; Trident/5.0)</code> with DNS <code>msnbot-131-253-25-91.search.msn.com.</code></li>
<li>I queried Solr for all hits using the MSN bot DNS (<code>dns:*msnbot* AND dns:*.msn.com.</code>) and found 457,706</li>
<li>I extracted their IPs using Solr’s CSV format and ran them through my <code>resolve-addresses.py</code> script and found that they all belong to MICROSOFT-CORP-MSN-AS-BLOCK (AS8075)</li>
<li>Note that <a href="https://www.bing.com/webmasters/help/how-to-verify-bingbot-3905dc26">Microsoft’s docs say that reverse lookups on Bingbot IPs will always have “search.msn.com”</a> so it is safe to purge these as non-human traffic</li>
<li>I purged the hits with <code>ilri/check-spider-ip-hits.sh</code> (though I had to do it in 3 batches because I forgot to increase the <code>facet.limit</code> so I was only getting them 100 at a time)</li>
</ul>
</li>
<li>Moayad sent a pull request a few days ago to re-work the harvesting on OpenRXV
<ul>
<li>It will hopefully also fix the duplicate and missing items issues</li>
<li>I had a Skype with him to discuss</li>
<li>I got it running on podman-compose, but I had to fix the storage permissions on the Elasticsearch volume after the first time it tries (and fails) to run (a sketch of the usual fix follows this list)</li>
</ul>
</li>
<li>Then I uploaded the resulting CSV to CGSpace, updating 161 items</li>
<li>Start a harvest on AReS</li>
<li>I found <a href="https://jira.lyrasis.org/browse/DS-1977">a bug</a> and <a href="https://github.com/DSpace/DSpace/pull/2584">a patch</a> for private items showing up in the DSpace sitemap
<ul>
<li>The fix is super simple, I should try to apply it</li>
</ul>
</li>
</ul>
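<ul>
<li>The permission fix itself isn’t shown above; on rootless podman the usual approach (a sketch, with the volume name being a guess) is to chown the Elasticsearch data volume to the container user’s UID from inside the user namespace:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ podman volume ls
# the volume name below is a guess; use the one shown by podman volume ls
$ podman unshare chown -R 1000:1000 ~/.local/share/containers/storage/volumes/docker_esData_7/_data
</code></pre>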
<h2id="2021-06-21">2021-06-21</h2>
<ul>
<li>The AReS harvesting finished, but the indexes got messed up again</li>
<li>I was looking at the JSON export I made yesterday and trying to understand the situation with duplicates
<ul>
<li>We have 90,000+ items, but only 85,000 unique:</li>
<li>Some appear twice in the Elasticsearch index, but appear in only one collection</li>
<li>Some appear twice in the Elasticsearch index, and appear in <em>two</em> collections</li>
<li>Some appear twice in the Elasticsearch index, but appear in three collections (!)</li>
</ul>
</li>
<li>So really we need to just check whether a handle exists before we insert it</li>
<li>I tested the <a href="https://github.com/DSpace/DSpace/pull/2584">pull request for DS-1977</a> that adjusts the sitemap generation code to exclude private items
<ul>
<li>It applies cleanly and seems to work, but we don’t actually have any private items</li>
<li>The issue we are having with AReS hitting restricted items in the sitemap is that the items have restricted metadata, not that they are private</li>
</ul>
</li>
<li>Testing the <a href="https://github.com/DSpace/DSpace/pull/2275">pull request for DS-4065</a> where the REST API’s <code>/rest/items</code> endpoint is not aware of private items and returns an incorrect number of items
<ul>
<li>This is most easily seen by setting a low limit in <code>/rest/items</code>, making one of the items private, and requesting items again with the same limit</li>
<li>I confirmed the issue on the current DSpace 6 Demo:</li>
<li>I tested the pull request on DSpace Test and it works, so I left a note on GitHub and Jira</li>
</ul>
</li>
<li>Last week I noticed that the Gender Platform website is using “cgspace.cgiar.org” links for CGSpace, instead of handles
<ul>
<li>I emailed Fabio and Marianne to ask them to please use the Handle links</li>
</ul>
</li>
<li>I tested the <a href="https://github.com/DSpace/DSpace/pull/2543">pull request for DS-4271</a> where Discovery filters of type “contains” don’t work as expected when the user’s search term has spaces
<ul>
<li>I tested with filter “farmer managed irrigation systems” on DSpace Test</li>
<li>Before the patch I got 293 results, and the few I checked didn’t have the expected metadata value</li>
<li>After the patch I got 162 results, and all the items I checked had the exact metadata value I was expecting</li>
</ul>
</li>
<li>I tested a fresh harvest from my local AReS on DSpace Test with the DS-4065 REST API patch and here are my results:
<ul>
<li>90459 in final from last harvesting</li>
<li>90307 in temp after new harvest</li>
<li>90327 in temp after start plugins</li>
</ul>
</li>
<li>The 90327 number seems closer to the “real” number of items on CGSpace…
<ul>
<li>Seems close, but not entirely correct yet</li>
</ul>
</li>
<li>Make a <a href="https://github.com/atmire/COUNTER-Robots/pull/43">pull request</a> to the COUNTER-Robots project to add two new user agents: crusty and newspaper
<ul>
<li>These two bots have made ~3,000 requests on CGSpace</li>
<li>Then I added them to our local bot override in CGSpace (until the above pull request is merged) and ran my bot checking script</li>
</ul>
</li>
<li>Meet with the CGIAR–AGROVOC task group to discuss how we want to do the workflow for submitting new terms to AGROVOC</li>
<li>I extracted another list of all subjects to check against AGROVOC (a hedged lookup sketch appears at the end of this section):</li>
</ul>
<pre><code class="language-console" data-lang="console">\COPY (SELECT DISTINCT(LOWER(text_value)) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-06-all-subjects.csv WITH CSV HEADER;
$ csvcut -c 1 /tmp/2021-07-06-all-subjects.csv | sed 1d > /tmp/2021-07-06-all-subjects.txt
</code></pre><ul>
<li>Looking into the database issues we had last month on 2021-06-23</li>
<li>These bots account for ~42,000 hits in our statistics… I will just purge them and add them to our local override, but I can’t be bothered to submit them to COUNTER-Robots since I’d have to look up the information for each one</li>
<li>I re-synced DSpace Test (linode26) with the assetstore, Solr statistics, and database from CGSpace (linode18)</li>
</ul>
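<ul>
<li>For the AGROVOC check mentioned above, a rough sketch that flags subjects with no match (assuming the public AGROVOC Skosmos endpoint and its <code>/rest/v1/search</code> response shape) would be:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ while IFS= read -r subject; do
    hits=$(curl -sG 'https://agrovoc.fao.org/browse/rest/v1/search' --data-urlencode "query=$subject" --data-urlencode 'lang=en' | jq '.results | length')
    [ "$hits" = '0' ] && echo "$subject"
  done < /tmp/2021-07-06-all-subjects.txt > /tmp/2021-07-06-subjects-not-in-agrovoc.txt
</code></pre>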
<h2id="2021-06-23">2021-06-23</h2>
<ul>
<li>I woke up this morning to find CGSpace down
<ul>
<li>The logs show a high number of abandoned PostgreSQL connections and locks:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console"># journalctl --since=today -u tomcat7 | grep -c 'Connection has been abandoned'
978
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
10100
</code></pre><ul>
<li>I think it might have been some kind of attack because the number of XMLUI sessions was through the roof at one point (10,000!) and the number of unique IPs accessing the server that day is much higher than any other day:</li>
</ul>
<pre><code class="language-console" data-lang="console"># for num in {10..26}; do echo "2021-06-$num"; zcat /var/log/nginx/access.log.*.gz /var/log/nginx/library-access.log.*.gz | grep "$num/Jun/2021" | awk '{print $1}' | sort | uniq | wc -l; done
2021-06-10
10693
2021-06-11
10587
2021-06-12
7958
2021-06-13
7681
2021-06-14
12639
2021-06-15
15388
2021-06-16
12245
2021-06-17
11187
2021-06-18
9684
2021-06-19
7835
2021-06-20
7198
2021-06-21
10380
2021-06-22
10255
2021-06-23
15878
2021-06-24
9963
2021-06-25
9439
2021-06-26
7930
</code></pre><ul>
<li>Similarly, the number of connections to the REST API was around the average for the recent weeks before:</li>
</ul>
<pre><code class="language-console" data-lang="console"># for num in {10..26}; do echo "2021-06-$num"; zcat /var/log/nginx/rest.*.gz | grep "$num/Jun/2021" | awk '{print $1}' | sort | uniq | wc -l; done
2021-06-10
1183
2021-06-11
1074
2021-06-12
911
2021-06-13
892
2021-06-14
1320
2021-06-15
1257
2021-06-16
1208
2021-06-17
1119
2021-06-18
965
2021-06-19
985
2021-06-20
854
2021-06-21
1098
2021-06-22
1028
2021-06-23
1375
2021-06-24
1135
2021-06-25
969
2021-06-26
904
</code></pre><ul>
<li>According to goaccess, the traffic spike started at 2AM (remember that the first “Pool empty” error in dspace.log was at 4:01AM)</li>
<li>Moayad sent a fix for the add missing items plugins issue (<a href="https://github.com/ilri/OpenRXV/pull/107">#107</a>)
<ul>
<li>It works MUCH faster because it correctly identifies the missing handles in each repository</li>
<li>Also it adds better debug messages to the api logs</li>
</ul>
</li>
<li>I sent a message to Atmire, hoping that the database logging stuff they put in place last time this happened will be of help now</li>
<li>In the meantime, I decided to upgrade Tomcat from 7.0.107 to 7.0.109, and the PostgreSQL JDBC driver from 42.2.20 to 42.2.22 (first on DSpace Test)</li>
<li>I also applied the following patches from the 6.4 milestone to our <code>6_x-prod</code> branch:
<ul>
<li>DS-4065: resource policy aware REST API hibernate queries</li>
<li>DS-4271: Replaced brackets with double quotes in SolrServiceImpl</li>
</ul>
</li>
<li>After upgrading and restarting Tomcat the database connections and locks were back down to normal levels:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
63
</code></pre><ul>
<li>Looking in the DSpace log, the first “pool empty” message I saw this morning was at 4AM:</li>
</ul>
<pre><code class="language-console" data-lang="console">2021-06-23 04:01:14,596 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ [http-bio-127.0.0.1-8443-exec-4323] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
</code></pre><ul>
<li>Oh, and I notice 8,000 hits from a Flipboard bot using this user-agent:</li>
</ul>
<pre><code class="language-console" data-lang="console">Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0 (FlipboardProxy/1.2; +http://flipboard.com/browserproxy)
</code></pre><ul>
<li>We can purge them, as this is not user traffic: <a href="https://about.flipboard.com/browserproxy/">https://about.flipboard.com/browserproxy/</a>
<ul>
<li>I will add it to our local user agent pattern file and eventually submit a pull request to COUNTER-Robots</li>
</ul>
</li>
<li>I merged <a href="https://github.com/ilri/OpenRXV/pull/96">Moayad’s health check pull request in AReS</a> and I will deploy it on the production server soon</li>
</ul>
<h2id="2021-06-24">2021-06-24</h2>
<ul>
<li>I deployed the new OpenRXV code on CGSpace but I’m having problems with the indexing, something about missing the mappings on the <code>openrxv-items-temp</code> index
<ul>
<li>I extracted the mappings from my local instance using <code>elasticdump</code> and after putting them on CGSpace I was able to harvest…</li>
<li>But still, there are way too many duplicates and I’m not sure what the actual number of items should be</li>
<li>According to the OAI ListRecords for each of our repositories, we should have about:
<ul>
<li>MELSpace: 9537</li>
<li>WorldFish: 4483</li>
<li>CGSpace: 91305</li>
<li>Total: 105325</li>
</ul>
</li>
<li>Looking at the last backup I have from harvesting before these changes we have 104,000 total handles, but only 99186 unique (a hedged counting sketch appears after this list)</li>
</ul>
</li>
<li>So the harvest on the live site is missing items, then why didn’t the add missing items plugin find them?!
<ul>
<li>I notice that we are missing the <code>type</code> in the metadata structure config for each repository on the production site, and we are using <code>type</code> for item type in the actual schema… so maybe there is a conflict there</li>
<li>I will rename type to <code>item_type</code> and add it back to the metadata structure</li>
<li>The add missing items definitely checks this field…</li>
<li>I modified my local backup to add <code>type: item</code> and uploaded it to the temp index on production</li>
<li>Oh! nginx is blocking OpenRXV’s attempt to read the sitemap:</li>
<li>I fixed nginx so it always allows people to get the sitemap and then re-ran the plugins… now it’s checking 180,000+ handles to see if they are collections or items…
<ul>
<li>I see it fetched the sitemap three times, we need to make sure it’s only doing it once for each repository</li>
</ul>
</li>
<li>According to the api logs we will be adding 5,697 items:</li>
</ul>
</li>
<li>Spent a few hours with Moayad troubleshooting and improving OpenRXV
<ul>
<li>We found a bug in the harvesting code that can occur when you are harvesting DSpace 5 and DSpace 6 instances, as DSpace 5 uses numeric (long) IDs, and DSpace 6 uses UUIDs</li>
</ul>
</li>
</ul>
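<ul>
<li>The counting itself isn’t shown above; assuming the backup is the newline-delimited JSON that <code>elasticdump</code> produces, with the handle in each document’s <code>_source</code>, a rough way to compare total vs. unique handles (the file name here is a placeholder) is:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ jq -r '._source.handle' /tmp/openrxv-items_data.json | wc -l
$ jq -r '._source.handle' /tmp/openrxv-items_data.json | sort | uniq | wc -l
</code></pre>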
<h2id="2021-06-25">2021-06-25</h2>
<ul>
<li>The new OpenRXV code creates almost 200,000 jobs when the plugins start
<ul>
<li>I figured out how to use <a href="https://github.com/bee-queue/arena/tree/master/example">bee-queue/arena</a> to view our Bull job queue</li>
<li>Also, we can see the jobs directly using redis-cli:</li>
<li>We can apparently get the names of the jobs in each hash using <code>hget</code>:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">127.0.0.1:6379> TYPE bull:plugins:401827
hash
127.0.0.1:6379> HGET bull:plugins:401827 name
"dspace_add_missing_items"
</code></pre><ul>
<li>I whipped up a one-liner to get the keys for all plugin jobs, convert them to redis <code>HGET</code> commands to extract the value of the name field, and then sort them by their counts:</li>
</ul>
<pre><code class="language-console" data-lang="console">
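# (the first command of this pipeline didn't survive in these notes; it was
# presumably something along the lines of: redis-cli KEYS "bull:plugins:*")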
| sed -e 's/^bull/HGET bull/' -e 's/\([[:digit:]]\)$/\1 name/' \
| ncat -w 3 localhost 6379 \
| grep -v -E '^\$' | sort | uniq -c | sort -h
3 dspace_health_check
4 -ERR wrong number of arguments for 'hget' command
12 mel_downloads_and_views
129 dspace_altmetrics
932 dspace_downloads_and_views
186428 dspace_add_missing_items
</code></pre><ul>
<li>Note that this uses <code>ncat</code> to send commands directly to redis all at once instead of one at a time (<code>netcat</code> didn’t work here, as it doesn’t know when our input is finished and never quits)
<ul>
<li>I thought of using <code>redis-cli --pipe</code> but then you have to construct the commands in the redis protocol format with the number of args and length of each command (an example of that encoding follows this list)</li>
</ul>
</li>
<li>There is clearly something wrong with the new DSpace health check plugin, as it creates WAY too many jobs every time we run the plugins</li>
</ul>
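<ul>
<li>For reference on that last point, <code>redis-cli --pipe</code> expects raw Redis protocol, where each command is an array with explicit argument counts and byte lengths and every line is terminated by CRLF; the <code>HGET</code> above would have to be encoded roughly like this:</li>
</ul>
<pre><code class="language-console" data-lang="console">*3
$4
HGET
$19
bull:plugins:401827
$4
name
</code></pre>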
<h2id="2021-06-27">2021-06-27</h2>
<ul>
<li>Looking into the spike in PostgreSQL connections last week
<ul>
<li>I see the same things that I always see (large number of connections waiting for lock, large number of threads, high CPU usage, etc), but I also see almost 10,000 DSpace sessions on 2021-06-25</li>
<li>Annoyingly I found 37,000 more hits from Bing using <code>dns:*msnbot* AND dns:*.msn.com.</code> as a Solr filter (a hedged example query appears after this list)
<ul>
<li>WTF, they are using a normal user agent: <code>Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko</code></li>
<li>I will purge the IPs and add this user agent to the nginx config so that we can rate limit it</li>
</ul>
</li>
</ul>
</li>
<li>I signed up for Bing Webmaster Tools and verified cgspace.cgiar.org with the BingSiteAuth.xml file
<ul>
<li>Also I adjusted the nginx config to explicitly allow access to <code>robots.txt</code> even when bots are rate limited</li>
<li>Also I found that Bing was auto discovering all our RSS and Atom feeds as “sitemaps” so I deleted 750 of them and submitted the real sitemap</li>
<li>I need to see if I can adjust the nginx config further to map the <code>bot</code> user agent to DNS like msnbot…</li>
</ul>
</li>
<li>Review Abdullah’s filter on click pull request
<ul>
<li>I rebased his code on the latest master branch and tested adding filter on click to the map and list components, and it works fine</li>
<li>There seems to be a bug that breaks scrolling on the page though…</li>
<li>Abdullah fixed the bug in the filter on click branch</li>
</ul>
</li>
</ul>
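<ul>
<li>For the record, the Solr filter above can be run directly against the statistics core to count the hits before purging; a hedged example (core name and port assumed) using the standard select handler:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:8081/solr/statistics/select?q=dns:*msnbot*+AND+dns:*.msn.com.&rows=0&wt=json'
</code></pre>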
<h2id="2021-06-28">2021-06-28</h2>
<ul>
<li>Some work on OpenRXV
<ul>
<li>Integrate <code>prettier</code> into the frontend and backend and format everything on the <code>master</code> branch</li>
<li>Re-work the GitHub Actions workflow for frontend and add one for backend</li>
<li>The workflows run <code>npm install</code> to test dependencies, and <code>npm ci</code> with <code>prettier</code> to check formatting</li>
<li>Also I merged Abdullah’s filter on click pull request</li>
</ul>
</li>
</ul>
<h2id="2021-06-30">2021-06-30</h2>
<ul>
<li>CGSpace is showing a blank white page…
<ul>
<li>The status is HTTP 200, but it’s blank white… so UptimeRobot didn’t send a notification!</li>
</ul>
</li>
<li>The DSpace log shows:</li>
</ul>
<pre><code class="language-console" data-lang="console">2021-06-30 08:19:15,874 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ Cannot get a connection, pool error Timeout waiting for idle object
</code></pre><ul>
<li>The first one of these I see is from last night (2021-06-29) at 10:47 PM</li>
<li>I restarted Tomcat 7 and CGSpace came back up…</li>
<li>I didn’t see that Atmire had responded last week (on 2021-06-23) about the issues we had
<ul>
<li>He said they had to do the same thing that they did last time: switch to the postgres user and kill all activity</li>
<li>He said they found tons of connections to the REST API, like 3-4 per second, and asked if that was normal</li>
<li>I pointed him to our Tomcat server.xml configuration, saying that we purposefully isolated the Tomcat connection pools between the API and XMLUI for this purpose…</li>
</ul>
</li>
<li>Export a list of all CGSpace’s AGROVOC keywords with counts for Enrico and Elizabeth Arnaud to discuss with AGROVOC:</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT text_value AS "dcterms.subject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id = 187 GROUP BY "dcterms.subject" ORDER BY count DESC) to /tmp/2021-06-30-agrovoc.csv WITH CSV HEADER;
COPY 20780
</code></pre><ul>
<li>Actually Enrico wanted NON AGROVOC, so I extracted all the center and CRP subjects (ignoring system office and themes):</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242) GROUP BY subject ORDER BY count DESC) to /tmp/2021-06-30-non-agrovoc.csv WITH CSV HEADER;
COPY 1710
</code></pre><ul>
<li>Fix an issue in the Ansible infrastructure playbooks for the DSpace role
<ul>
<li>It was causing the template module to fail when setting up the npm environment</li>
<li>We needed to install <code>acl</code> so that Ansible can use <code>setfacl</code> on the target file before becoming an unprivileged user</li>
</ul>
</li>
<li>I saw a strange message in the Tomcat 7 journal on DSpace Test (linode26):</li>
</ul>
<pre><code class="language-console" data-lang="console">Jun 30 16:00:09 linode26 tomcat7[30294]: WARNING: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [111,733] milliseconds.
</code></pre><ul>
<li>What’s even crazier is that it is twice that on CGSpace (linode18)!</li>
<li>Apparently OpenJDK defaults to using <code>/dev/random</code> (see <code>/etc/java-8-openjdk/security/java.security</code>)</li>
<li><code>/dev/random</code> blocks and can take a long time to get entropy, and urandom on modern Linux is a cryptographically secure pseudorandom number generator (a sketch of the usual JAVA_OPTS workaround appears after this list)
<ul>
<li>Now Tomcat starts much faster and no warning is printed so I’m going to add this to our Ansible infrastructure playbooks</li>
</ul>
</li>
<li>Interesting resource about the lore behind the <code>/dev/./urandom</code> workaround that is posted all over the Internet, apparently due to a bug in early JVMs: <a href="https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6202721">https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6202721</a></li>
<li>I’m experimenting with using PgBouncer for pooling instead of Tomcat’s JDBC</li>
</ul>
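<ul>
<li>The actual change isn’t recorded above; the usual workaround (an assumption here, and the file path may differ per distribution) is to point the JVM’s entropy source at urandom via <code>JAVA_OPTS</code>, for example in <code>/etc/default/tomcat7</code>:</li>
</ul>
<pre><code class="language-console" data-lang="console"># append to JAVA_OPTS in /etc/default/tomcat7 (path assumed)
JAVA_OPTS="${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom"
</code></pre>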