<!DOCTYPE html>
|
|
<html lang="en" >
|
|
|
|
<head>
|
|
<meta charset="utf-8">
|
|
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
|
|
|
|
|
|
<meta property="og:title" content="April, 2021" />
|
|
<meta property="og:description" content="2021-04-01
|
|
|
|
I wrote a script to query Sherpa’s API for our ISSNs: sherpa-issn-lookup.py
|
|
|
|
I’m curious to see how the results compare with the results from Crossref yesterday
|
|
|
|
|
|
AReS Explorer has been down since this morning; I didn’t see anything in the systemd journal
|
|
|
|
I simply took everything down with docker-compose and brought it back up, and then it was OK
|
|
Perhaps one of the containers crashed; I should have looked closer but I was in a hurry
|
|
|
|
|
|
" />
|
|
<meta property="og:type" content="article" />
|
|
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2021-04/" />
|
|
<meta property="article:published_time" content="2021-04-01T09:50:54+03:00" />
|
|
<meta property="article:modified_time" content="2021-04-06T22:48:44+03:00" />
|
|
|
|
|
|
|
|
<meta name="twitter:card" content="summary"/>
|
|
<meta name="twitter:title" content="April, 2021"/>
|
|
<meta name="twitter:description" content="2021-04-01
|
|
|
|
I wrote a script to query Sherpa’s API for our ISSNs: sherpa-issn-lookup.py
|
|
|
|
I’m curious to see how the results compare with the results from Crossref yesterday
|
|
|
|
|
|
AReS Explorer has been down since this morning; I didn’t see anything in the systemd journal
|
|
|
|
I simply took everything down with docker-compose and brought it back up, and then it was OK
|
|
Perhaps one of the containers crashed; I should have looked closer but I was in a hurry
|
|
|
|
|
|
"/>
|
|
<meta name="generator" content="Hugo 0.82.0" />
|
|
|
|
|
|
|
|
<script type="application/ld+json">
|
|
{
|
|
"@context": "http://schema.org",
|
|
"@type": "BlogPosting",
|
|
"headline": "April, 2021",
|
|
"url": "https://alanorth.github.io/cgspace-notes/2021-04/",
|
|
"wordCount": "2530",
|
|
"datePublished": "2021-04-01T09:50:54+03:00",
|
|
"dateModified": "2021-04-06T22:48:44+03:00",
|
|
"author": {
|
|
"@type": "Person",
|
|
"name": "Alan Orth"
|
|
},
|
|
"keywords": "Notes"
|
|
}
|
|
</script>
|
|
|
|
|
|
|
|
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2021-04/">
|
|
|
|
<title>April, 2021 | CGSpace Notes</title>
|
|
|
|
|
|
<!-- combined, minified CSS -->
|
|
|
|
<link href="https://alanorth.github.io/cgspace-notes/css/style.beb8012edc08ba10be012f079d618dc243812267efe62e11f22fe49618f976a4.css" rel="stylesheet" integrity="sha256-vrgBLtwIuhC+AS8HnWGNwkOBImfv5i4R8i/klhj5dqQ=" crossorigin="anonymous">
|
|
|
|
|
|
<!-- minified Font Awesome for SVG icons -->
|
|
|
|
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.ffbfea088a9a1666ec65c3a8cb4906e2a0e4f92dc70dbbf400a125ad2422123a.js" integrity="sha256-/7/qCIqaFmbsZcOoy0kG4qDk+S3HDbv0AKElrSQiEjo=" crossorigin="anonymous"></script>
|
|
|
|
<!-- RSS 2.0 feed -->
|
|
|
|
|
|
|
|
|
|
</head>
|
|
|
|
<body>
|
|
|
|
|
|
<div class="blog-masthead">
|
|
<div class="container">
|
|
<nav class="nav blog-nav">
|
|
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
|
|
</nav>
|
|
</div>
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<header class="blog-header">
|
|
<div class="container">
|
|
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
|
|
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
|
|
</div>
|
|
</header>
|
|
|
|
|
|
|
|
|
|
<div class="container">
|
|
<div class="row">
|
|
<div class="col-sm-8 blog-main">
|
|
|
|
|
|
|
|
|
|
<article class="blog-post">
|
|
<header>
|
|
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2021-04/">April, 2021</a></h2>
|
|
<p class="blog-post-meta">
|
|
<time datetime="2021-04-01T09:50:54+03:00">Thu Apr 01, 2021</time>
|
|
in
|
|
<span class="fas fa-folder" aria-hidden="true"></span> <a href="/cgspace-notes/categories/notes/" rel="category tag">Notes</a>
|
|
|
|
|
|
</p>
|
|
</header>
|
|
<h2 id="2021-04-01">2021-04-01</h2>
|
|
<ul>
|
|
<li>I wrote a script to query Sherpa’s API for our ISSNs: <code>sherpa-issn-lookup.py</code> (a rough curl equivalent is sketched below)
|
|
<ul>
|
|
<li>I’m curious to see how the results compare with the results from Crossref yesterday</li>
|
|
</ul>
|
|
</li>
|
|
<li>AReS Explorer has been down since this morning; I didn’t see anything in the systemd journal
|
|
<ul>
|
|
<li>I simply took everything down with docker-compose and brought it back up, and then it was OK</li>
|
|
<li>Perhaps one of the containers crashed; I should have looked closer but I was in a hurry</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
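<ul>
<li>For reference, the script essentially does one query like this per ISSN against the Sherpa v2 API (a rough curl sketch; the API key and ISSN here are placeholders, and the real script may build the request slightly differently):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s -G 'https://v2.sherpa.ac.uk/cgi/retrieve' \
    --data-urlencode 'item-type=publication' \
    --data-urlencode 'api-key=REDACTED' \
    --data-urlencode 'format=Json' \
    --data-urlencode 'filter=[["issn","equals","0378-4290"]]' | python -m json.tool | less
</code></pre>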
|
|
<h2 id="2021-04-03">2021-04-03</h2>
|
|
<ul>
|
|
<li>Biruk from ICT contacted me to say that some CGSpace users still can’t log in
|
|
<ul>
|
|
<li>I guess the CGSpace LDAP bind account is really still locked after last week’s reset</li>
|
|
<li>He fixed the account and then I was finally able to bind and query:</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ ldapsearch -x -H ldaps://AZCGNEROOT2.CGIARAD.ORG:636/ -b "dc=cgiarad,dc=org" -D "cgspace-account" -W "(sAMAccountName=otheraccounttoquery)"
|
|
</code></pre><h2 id="2021-04-04">2021-04-04</h2>
|
|
<ul>
|
|
<li>Check the index aliases on AReS Explorer to make sure they are sane before starting a new harvest:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool | less
|
|
</code></pre><ul>
|
|
<li>Then set the <code>openrxv-items-final</code> index to read-only so we can make a backup:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ curl -X PUT "localhost:9200/openrxv-items-final/_settings" -H 'Content-Type: application/json' -d'{"settings": {"index.blocks.write": true}}'
|
|
{"acknowledged":true}%
|
|
$ curl -s -X POST http://localhost:9200/openrxv-items-final/_clone/openrxv-items-final-backup
|
|
{"acknowledged":true,"shards_acknowledged":true,"index":"openrxv-items-final-backup"}%
|
|
$ curl -X PUT "localhost:9200/openrxv-items-final/_settings" -H 'Content-Type: application/json' -d'{"settings": {"index.blocks.write": false}}'
|
|
</code></pre><ul>
|
|
<li>Then start a fresh harvest on AReS Explorer</li>
|
|
<li>Help Enrico get some 2020 statistics for the Roots, Tubers and Bananas (RTB) community on CGSpace
|
|
<ul>
|
|
<li>He was hitting <a href="https://github.com/ilri/OpenRXV/issues/66">a bug on AReS</a>, and he only needed stats for 2020, while AReS currently only gives all-time stats</li>
|
|
</ul>
|
|
</li>
|
|
<li>I cleaned up about 230 ISSNs on CGSpace in OpenRefine
|
|
<ul>
|
|
<li>I had exported them last week, then filtered for anything not looking like an ISSN with this GREL: <code>isNotNull(value.match(/^\p{Alnum}{4}-\p{Alnum}{4}$/))</code></li>
|
|
<li>Then I applied them on CGSpace with the <code>fix-metadata-values.py</code> script:</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ ./ilri/fix-metadata-values.py -i /tmp/2021-04-01-ISSNs.csv -db dspace -u dspace -p 'fuuu' -f cg.issn -t 'correct' -m 253
|
|
</code></pre><ul>
|
|
<li>For now I only fixed obvious errors like “1234-5678.” and “e-ISSN: 1234-5678” etc., but there are still lots of invalid ones which need more manual work:
|
|
<ul>
|
|
<li>Too few characters</li>
|
|
<li>Too many characters</li>
|
|
<li>ISBNs</li>
|
|
</ul>
|
|
</li>
|
|
<li>Create the CGSpace community and collection structure for the new Accelerating Impacts of CGIAR Climate Research for Africa (AICCRA) and assign all workflow steps</li>
|
|
</ul>
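<ul>
<li>Going back to the Elasticsearch backup above: a quick way to confirm that the cloned index actually exists (a sketch using the cat indices API) is:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_cat/indices/openrxv-items*?v'
</code></pre>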
|
|
<h2 id="2021-04-04-1">2021-04-04</h2>
|
|
<ul>
|
|
<li>The AReS Explorer harvest from yesterday finished and the results look OK, but the Elasticsearch indexes are messed up again:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ curl -s 'http://localhost:9200/_alias/' | python -m json.tool
|
|
{
|
|
"openrxv-items-final": {
|
|
"aliases": {}
|
|
},
|
|
"openrxv-items-temp": {
|
|
"aliases": {
|
|
"openrxv-items": {}
|
|
}
|
|
},
|
|
...
|
|
}
|
|
</code></pre><ul>
|
|
<li><code>openrxv-items</code> should be an alias of <code>openrxv-items-final</code>, not <code>openrxv-items-temp</code>… I will have to fix that manually (see the sketch at the end of today’s notes)</li>
|
|
<li>Enrico asked for more information on the RTB stats I gave him yesterday
|
|
<ul>
|
|
<li>I remembered (again) that we can’t filter Atmire’s CUA stats by date issued</li>
|
|
<li>To show, for example, views/downloads in the year 2020 for RTB items issued in 2020, we would need to use the DSpace statistics API and POST a list of IDs and a custom date range</li>
|
|
<li>I tried to do that here by exporting the RTB community and extracting the IDs for items issued in 2020:</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ ~/dspace63/bin/dspace metadata-export -i 10568/80100 -f /tmp/rtb.csv
|
|
$ csvcut -c 'id,dcterms.issued,dcterms.issued[],dcterms.issued[en_US]' /tmp/rtb.csv | \
|
|
sed '1d' | \
|
|
csvsql --no-header --no-inference --query 'SELECT a AS id,COALESCE(b, "")||COALESCE(c, "")||COALESCE(d, "") AS issued FROM stdin' | \
|
|
csvgrep -c issued -m 2020 | \
|
|
csvcut -c id | \
|
|
sed '1d' | \
|
|
sort | \
|
|
uniq
|
|
</code></pre><ul>
|
|
<li>So that I remember in the future, this basically does the following:
|
|
<ul>
|
|
<li>Use csvcut to extract the id and all date issued columns from the CSV</li>
|
|
<li>Use sed to remove the header so we can refer to the columns using default a, b, c instead of their real names (which are tricky to match due to special characters)</li>
|
|
<li>Use csvsql to concatenate the various date issued columns (coalescing where null)</li>
|
|
<li>Use csvgrep to filter items by date issued in 2020</li>
|
|
<li>Use csvcut to extract the id column</li>
|
|
<li>Use sed to delete the header row</li>
|
|
<li>Use sort and uniq to filter out any duplicate IDs (there were three)</li>
|
|
</ul>
|
|
</li>
|
|
<li>Then I have a list of 296 IDs for RTB items issued in 2020</li>
|
|
<li>I constructed a JSON file to post to the DSpace Statistics API:</li>
|
|
</ul>
|
|
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
|
|
<span style="color:#f92672">"limit"</span>: <span style="color:#ae81ff">100</span>,
|
|
<span style="color:#f92672">"page"</span>: <span style="color:#ae81ff">0</span>,
|
|
<span style="color:#f92672">"dateFrom"</span>: <span style="color:#e6db74">"2020-01-01T00:00:00Z"</span>,
|
|
<span style="color:#f92672">"dateTo"</span>: <span style="color:#e6db74">"2020-12-31T00:00:00Z"</span>,
|
|
<span style="color:#f92672">"items"</span>: [
|
|
<span style="color:#e6db74">"00358715-b70c-4fdd-aa55-730e05ba739e"</span>,
|
|
<span style="color:#e6db74">"004b54bb-f16f-4cec-9fbc-ab6c6345c43d"</span>,
|
|
<span style="color:#e6db74">"02fb7630-d71a-449e-b65d-32b4ea7d6904"</span>,
|
|
<span style="color:#960050;background-color:#1e0010">...</span>
|
|
]
|
|
}
|
|
</code></pre></div><ul>
|
|
<li>Then I submitted the file three times (changing the page parameter):</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ curl -s -d @/tmp/2020-items.txt https://cgspace.cgiar.org/rest/statistics/items | json_pp > /tmp/page1.json
|
|
$ curl -s -d @/tmp/2020-items.txt https://cgspace.cgiar.org/rest/statistics/items | json_pp > /tmp/page2.json
|
|
$ curl -s -d @/tmp/2020-items.txt https://cgspace.cgiar.org/rest/statistics/items | json_pp > /tmp/page3.json
|
|
</code></pre><ul>
|
|
<li>Then I extracted the views and downloads in the most ridiculous way:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ grep views /tmp/page*.json | grep -o -E '[0-9]+$' | sed 's/,//' | xargs | sed -e 's/ /+/g' | bc
|
|
30364
|
|
$ grep downloads /tmp/page*.json | grep -o -E '[0-9]+,' | sed 's/,//' | xargs | sed -e 's/ /+/g' | bc
|
|
9100
|
|
</code></pre><ul>
|
|
<li>Out of curiosity I did the same exercise for items issued in 2019 and got the following:
|
|
<ul>
|
|
<li>Views: 30721</li>
|
|
<li>Downloads: 10205</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
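<ul>
<li>For the record, the manual alias fix I mentioned above would be an atomic remove/add with the Elasticsearch aliases API, roughly like this (a sketch; I would make sure nothing is writing to the indexes at the time):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s -X POST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d'
{
  "actions": [
    { "remove": { "index": "openrxv-items-temp", "alias": "openrxv-items" } },
    { "add": { "index": "openrxv-items-final", "alias": "openrxv-items" } }
  ]
}'
</code></pre>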
|
|
<h2 id="2021-04-06">2021-04-06</h2>
|
|
<ul>
|
|
<li>Margarita from CCAFS was having problems deleting an item from CGSpace again
|
|
<ul>
|
|
<li>The error was “Authorization denied for action OBSOLETE (DELETE) on BITSTREAM:bd157345-448e …”</li>
|
|
<li>This is the same issue as last month</li>
|
|
</ul>
|
|
</li>
|
|
<li>Create a new collection on CGSpace for a new CIP project at Mishel Portilla’s request</li>
|
|
<li>I got a notice that CGSpace was down
|
|
<ul>
|
|
<li>I didn’t see anything strange at first, but there is an insane number of database connections:</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
12413
|
|
</code></pre><ul>
|
|
<li>The system journal shows thousands of these messages; this is the first one:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">Apr 06 07:52:13 linode18 tomcat7[556]: Apr 06, 2021 7:52:13 AM org.apache.tomcat.jdbc.pool.ConnectionPool abandon
|
|
</code></pre><ul>
|
|
<li>Around that time in the dspace log I see nothing unusual, but maybe these?</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">2021-04-06 07:52:29,409 INFO com.atmire.dspace.cua.CUASolrLoggerServiceImpl @ Updating : 200/127 docs in http://localhost:8081/solr/statistics
|
|
</code></pre><ul>
|
|
<li>(BTW what is the deal with the “200/127”? I should send a comment to Atmire)
|
|
<ul>
|
|
<li>I filed a ticket with Atmire: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-tickets">https://tracker.atmire.com/tickets-cgiar-ilri/view-tickets</a></li>
|
|
</ul>
|
|
</li>
|
|
<li>I restarted the PostgreSQL and Tomcat services and now I see fewer connections, but still WAY high:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
3640
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
2968
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
13
|
|
</code></pre><ul>
|
|
<li>After ten minutes or so it went back down…</li>
|
|
<li>And now it’s back up in the thousands… I am seeing a lot of stuff in the dspace log like this:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">2021-04-06 11:59:34,364 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717951
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717952
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717953
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717954
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717955
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717956
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717957
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717958
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717959
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717960
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717961
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717962
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717963
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717964
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717965
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717966
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717967
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717968
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717969
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717970
|
|
2021-04-06 11:59:34,365 INFO org.dspace.content.MetadataValueServiceImpl @ user.hidden@cgiar.org:session_id=65F32E67CE8E347F64EFB5EB4E349B9B:delete_metadata_value: metadata_value_id=5717971
|
|
</code></pre><ul>
|
|
<li>I sent some notes and a log to Atmire on our existing issue about the database stuff
|
|
<ul>
|
|
<li>Also I asked them about the possibility of doing a formal review of Hibernate</li>
|
|
</ul>
|
|
</li>
|
|
<li>Falcon 3.0.0 was released so I updated the 3.0.0 branch for dspace-statistics-api and merged it to <code>v6_x</code>
|
|
<ul>
|
|
<li>I also fixed one minor (unrelated) bug in the tests</li>
|
|
<li>Then I deployed the new version on DSpace Test</li>
|
|
</ul>
|
|
</li>
|
|
<li>I had a meeting with Peter and Abenet about CGSpace TODOs</li>
|
|
<li>CGSpace went down again and the PostgreSQL locks are through the roof:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
12154
|
|
</code></pre><ul>
|
|
<li>I don’t see any activity on the REST API, but in the last four hours there have been 3,500 DSpace sessions:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console"># grep -a -E '2021-04-06 (13|14|15|16|17):' /home/cgspace.cgiar.org/log/dspace.log.2021-04-06 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l
|
|
3547
|
|
</code></pre><ul>
|
|
<li>I looked at the same time of day for the past few weeks and it seems to be a normal number of sessions:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console"># for file in /home/cgspace.cgiar.org/log/dspace.log.2021-0{3,4}-*; do grep -a -E "2021-0(3|4)-[0-9]{2} (13|14|15|16|17):" "$file" | grep -o -E 'session_id=[A-Z0-9]{32}' | sort | uniq | wc -l; done
|
|
...
|
|
3572
|
|
4085
|
|
3476
|
|
3128
|
|
2949
|
|
2016
|
|
1839
|
|
4513
|
|
3463
|
|
4425
|
|
3328
|
|
2783
|
|
3898
|
|
3848
|
|
7799
|
|
255
|
|
534
|
|
2755
|
|
599
|
|
4463
|
|
3547
|
|
</code></pre><ul>
|
|
<li>What about total number of sessions per day?</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console"># for file in /home/cgspace.cgiar.org/log/dspace.log.2021-0{3,4}-*; do echo "$file:"; grep -a -o -E 'session_id=[A-Z0-9]{32}' "$file" | sort | uniq | wc -l; done
|
|
...
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-03-28:
|
|
11784
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-03-29:
|
|
15104
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-03-30:
|
|
19396
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-03-31:
|
|
32612
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-04-01:
|
|
26037
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-04-02:
|
|
14315
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-04-03:
|
|
12530
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-04-04:
|
|
13138
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-04-05:
|
|
16756
|
|
/home/cgspace.cgiar.org/log/dspace.log.2021-04-06:
|
|
12343
|
|
</code></pre><ul>
|
|
<li>So it’s not the number of sessions… it’s something with the workload…</li>
|
|
<li>I had to step away for an hour or so and when I came back the site was still down and there were still 12,000 locks
|
|
<ul>
|
|
<li>I restarted postgresql and tomcat7…</li>
|
|
</ul>
|
|
</li>
|
|
<li>The locks in PostgreSQL shot up again…</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
3447
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
3527
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
4582
|
|
</code></pre><ul>
|
|
<li>I don’t know what the hell is going on, but the PostgreSQL connections and locks are way higher than ever before:</li>
|
|
</ul>
|
|
<p><img src="/cgspace-notes/2021/04/postgres_connections_cgspace-week.png" alt="PostgreSQL connections week">
|
|
<img src="/cgspace-notes/2021/04/postgres_locks_cgspace-week.png" alt="PostgreSQL locks week">
|
|
<img src="/cgspace-notes/2021/04/jmx_tomcat_dbpools-week.png" alt="Tomcat database pool"></p>
|
|
<ul>
|
|
<li>Otherwise, the number of DSpace sessions is completely normal:</li>
|
|
</ul>
|
|
<p><img src="/cgspace-notes/2021/04/jmx_dspace_sessions-week.png" alt="DSpace sessions"></p>
|
|
<ul>
|
|
<li>While looking at the nginx logs I see that MEL is trying to log into CGSpace’s REST API and delete items:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">34.209.213.122 - - [06/Apr/2021:03:50:46 +0200] "POST /rest/login HTTP/1.1" 401 727 "-" "MEL"
|
|
34.209.213.122 - - [06/Apr/2021:03:50:48 +0200] "DELETE /rest/items/95f52bf1-f082-4e10-ad57-268a76ca18ec/metadata HTTP/1.1" 401 704 "-" "-"
|
|
</code></pre><ul>
|
|
<li>I see a few of these per day going back several months
|
|
<ul>
|
|
<li>I sent a message to Salem and Enrico to ask if they know anything about it</li>
|
|
</ul>
|
|
</li>
|
|
<li>Also annoying, I see tons of what look like penetration testing requests from Qualys:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">2021-04-04 06:35:17,889 INFO org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=FF1E051BCA7D81CC5A807D85380D81E5:ip_addr=64.39.108.48:failed_login:no DN found for user "'><qss a=X158062356Y1_2Z>
|
|
2021-04-04 06:35:17,889 INFO org.dspace.authenticate.PasswordAuthentication @ anonymous:session_id=FF1E051BCA7D81CC5A807D85380D81E5:ip_addr=64.39.108.48:authenticate:attempting password auth of user="'><qss a=X158062356Y1_2Z>
|
|
2021-04-04 06:35:17,890 INFO org.dspace.app.xmlui.utils.AuthenticationUtil @ anonymous:session_id=FF1E051BCA7D81CC5A807D85380D81E5:ip_addr=64.39.108.48:failed_login:email="'><qss a=X158062356Y1_2Z>, realm=null, result=2
|
|
2021-04-04 06:35:18,145 INFO org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=FF1E051BCA7D81CC5A807D85380D81E5:ip_addr=64.39.108.48:auth:attempting trivial auth of user=was@qualys.com
|
|
2021-04-04 06:35:18,519 INFO org.dspace.authenticate.LDAPAuthentication @ anonymous:session_id=FF1E051BCA7D81CC5A807D85380D81E5:ip_addr=64.39.108.48:failed_login:no DN found for user was@qualys.com
|
|
2021-04-04 06:35:18,520 INFO org.dspace.authenticate.PasswordAuthentication @ anonymous:session_id=FF1E051BCA7D81CC5A807D85380D81E5:ip_addr=64.39.108.48:authenticate:attempting password auth of user=was@qualys.com
|
|
</code></pre><ul>
|
|
<li>I deleted the ilri/AReS repository on GitHub since we haven’t updated it in two years
|
|
<ul>
|
|
<li>All development is happening in <a href="https://github.com/ilri/openRXV">https://github.com/ilri/openRXV</a> now</li>
|
|
</ul>
|
|
</li>
|
|
<li>10PM and the server is down again, with locks through the roof:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
12198
|
|
</code></pre><ul>
|
|
<li>I see that there are tons of PostgreSQL connections getting abandoned today, compared to very few in the past few weeks:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ journalctl -u tomcat7 --since=today | grep -c 'ConnectionPool abandon'
|
|
1838
|
|
$ journalctl -u tomcat7 --since=2021-03-20 --until=2021-04-05 | grep -c 'ConnectionPool abandon'
|
|
3
|
|
</code></pre><ul>
|
|
<li>I even restarted the server and connections were low for a few minutes until they shot back up:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
13
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
8651
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
8940
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
10504
|
|
</code></pre><ul>
|
|
<li>I had to go to bed and I bet it will crash and be down for hours until I wake up…</li>
|
|
<li>What the hell is this user agent?</li>
|
|
</ul>
|
|
<pre><code>54.197.119.143 - - [06/Apr/2021:19:18:11 +0200] "GET /handle/10568/16499 HTTP/1.1" 499 0 "-" "GetUrl/1.0 wdestiny@umich.edu (Linux)"
|
|
</code></pre><h2 id="2021-04-07">2021-04-07</h2>
|
|
<ul>
|
|
<li>CGSpace was still down from last night of course, with tons of database locks:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
12168
|
|
</code></pre><ul>
|
|
<li>I restarted the server again and the locks came back</li>
|
|
<li>Atmire responded to the message from yesterday
|
|
<ul>
|
|
<li>They noticed something in the logs about emails failing to be sent</li>
|
|
<li>There appears to be an issue sending emails for workflow tasks when a user in the notified group has an invalid email address:</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">2021-04-01 12:45:11,414 WARN org.dspace.workflowbasic.BasicWorkflowServiceImpl @ a.akwarandu@cgiar.org:session_id=2F20F20D4A8C36DB53D42DE45DFA3CCE:notifyGroupofTask:cannot email user group_id=aecf811b-b7e9-4b6f-8776-3d372e6a048b workflow_item_id=33085\colon; Invalid Addresses (com.sun.mail.smtp.SMTPAddressFailedException\colon; 501 5.1.3 Invalid address
|
|
</code></pre><ul>
|
|
<li>The issue is not the named user above, but a member of the group…</li>
|
|
<li>And the group does have users with invalid email addresses (probably accounts created automatically after authenticating with LDAP):</li>
|
|
</ul>
|
|
<p><img src="/cgspace-notes/2021/04/group-invalid-email.png" alt="DSpace group"></p>
|
|
<ul>
|
|
<li>I extracted all the group IDs from recent logs that had users with invalid email addresses:</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ grep -a -E 'email user group_id=\b[0-9a-f]{8}\b-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-\b[0-9a-f]{12}\b' /home/cgspace.cgiar.org/log/dspace.log.* | grep -o -E '\b[0-9a-f]{8}\b-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-\b[0-9a-f]{12}\b' | sort | uniq
|
|
0a30d6ae-74a6-4eee-a8f5-ee5d15192ee6
|
|
1769137c-36d4-42b2-8fec-60585e110db7
|
|
203c8614-8a97-4ac8-9686-d9d62cb52acc
|
|
294603de-3d09-464e-a5b0-09e452c6b5ab
|
|
35878555-9623-4679-beb8-bb3395fdf26e
|
|
3d8a5efa-5509-4bf9-9374-2bc714aceb99
|
|
4238208a-f848-47cb-9dd2-43f9f954a4af
|
|
44939b84-1894-41e7-b3e6-8c8d1781057b
|
|
49ba087e-75a3-45ce-805c-69eeda0f786b
|
|
4a6606ce-0284-421d-bf80-4dafddba2d42
|
|
527de6aa-9cd0-4988-bf5f-c9c92ba2ac10
|
|
54cd1b16-65bf-4041-9d84-fb2ea3301d6d
|
|
58982847-5f7c-4b8b-a7b0-4d4de702136e
|
|
5f0b85be-bd23-47de-927d-bca368fa1fbc
|
|
646ada17-e4ef-49f6-9378-af7e58596ce1
|
|
7e2f4bf8-fbc9-4b2f-97a4-75e5427bef90
|
|
8029fd53-f9f5-4107-bfc3-8815507265cf
|
|
81faa934-c602-4608-bf45-de91845dfea7
|
|
8611a462-210c-4be1-a5bb-f87a065e6113
|
|
8855c903-ef86-433c-b0be-c12300eb0f84
|
|
8c7ece98-3598-4de7-a885-d61fd033bea8
|
|
8c9a0d01-2d12-4a99-84f9-cdc25ac072f9
|
|
8f9f888a-b501-41f3-a462-4da16150eebf
|
|
94168f0e-9f45-4112-ac8d-3ba9be917842
|
|
96998038-f381-47dc-8488-ff7252703627
|
|
9768f4a8-3018-44e9-bf58-beba4296327c
|
|
9a99e8d2-558e-4fc1-8011-e4411f658414
|
|
a34e6400-78ed-45c0-a751-abc039eed2e6
|
|
a9da5af3-4ec7-4a9b-becb-6e3d028d594d
|
|
abf5201c-8be5-4dee-b461-132203dd51cb
|
|
adb5658c-cef3-402f-87b6-b498f580351c
|
|
aecf811b-b7e9-4b6f-8776-3d372e6a048b
|
|
ba5aae61-ea34-4ac1-9490-4645acf2382f
|
|
bf7f3638-c7c6-4a8f-893d-891a6d3dafff
|
|
c617ada0-09d1-40ed-b479-1c4860a4f724
|
|
cff91d44-a855-458c-89e5-bd48c17d1a54
|
|
e65171ae-a2bf-4043-8f54-f8457bc9174b
|
|
e7098b40-4701-4ca2-b9a9-3a1282f67044
|
|
e904f122-71dc-439b-b877-313ef62486d7
|
|
ede59734-adac-4c01-8691-b45f19088d37
|
|
f88bd6bb-f93f-41cb-872f-ff26f6237068
|
|
f985f5fb-be5c-430b-a8f1-cf86ae4fc49a
|
|
fe800006-aaec-4f9e-9ab4-f9475b4cbdc3
|
|
</code></pre><h2 id="2021-04-08">2021-04-08</h2>
|
|
<ul>
|
|
<li>I can’t believe it but the server has been down for twelve hours or so
|
|
<ul>
|
|
<li>The locks have not changed since I went to bed last night:</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
12070
|
|
</code></pre><ul>
|
|
<li>I restarted PostgreSQL and Tomcat and the locks go straight back up!</li>
|
|
</ul>
|
|
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
13
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
986
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
1194
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
1212
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
1489
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
2124
|
|
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
|
|
5934
|
|
</code></pre><h2 id="2021-04-09">2021-04-09</h2>
|
|
<ul>
|
|
<li>Atmire managed to get CGSpace back up by killing all the PostgreSQL connections yesterday
|
|
<ul>
|
|
<li>I don’t know how they did it… (my guess is sketched at the end of these notes)</li>
|
|
<li>They also think it’s weird that restarting PostgreSQL didn’t kill the connections</li>
|
|
<li>They asked some more questions, like for example if there were also issues on DSpace Test</li>
|
|
<li>Strangely enough, I checked DSpace Test and noticed a clear spike in PostgreSQL locks on the morning of April 6th as well!</li>
|
|
</ul>
|
|
</li>
|
|
</ul>
|
|
<p><img src="/cgspace-notes/2021/04/postgres_locks_ALL-week-PROD.png" alt="PostgreSQL locks week CGSpace">
|
|
<img src="/cgspace-notes/2021/04/postgres_locks_ALL-week-TEST.png" alt="PostgreSQL locks week DSpace Test"></p>
|
|
<ul>
|
|
<li>I definitely need to look into that!</li>
|
|
</ul>
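<ul>
<li>For future reference, my guess is that “killing all the PostgreSQL connections” means something like this, which terminates every backend connected to the DSpace database except the current one (a sketch; I have not run it on CGSpace):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ psql -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'dspace' AND pid != pg_backend_pid();"
</code></pre>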
|
|
<!-- raw HTML omitted -->
|
|
|
|
|
|
|
|
|
|
|
|
</article>
|
|
|
|
|
|
|
|
</div> <!-- /.blog-main -->
|
|
|
|
<aside class="col-sm-3 ml-auto blog-sidebar">
|
|
|
|
|
|
|
|
<section class="sidebar-module">
|
|
<h4>Recent Posts</h4>
|
|
<ol class="list-unstyled">
|
|
|
|
|
|
<li><a href="/cgspace-notes/2021-04/">April, 2021</a></li>
|
|
|
|
<li><a href="/cgspace-notes/2021-03/">March, 2021</a></li>
|
|
|
|
<li><a href="/cgspace-notes/cgspace-cgcorev2-migration/">CGSpace CG Core v2 Migration</a></li>
|
|
|
|
<li><a href="/cgspace-notes/2021-02/">February, 2021</a></li>
|
|
|
|
<li><a href="/cgspace-notes/2021-01/">January, 2021</a></li>
|
|
|
|
</ol>
|
|
</section>
|
|
|
|
|
|
|
|
|
|
<section class="sidebar-module">
|
|
<h4>Links</h4>
|
|
<ol class="list-unstyled">
|
|
|
|
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
|
|
|
|
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
|
|
|
|
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
|
|
|
|
</ol>
|
|
</section>
|
|
|
|
</aside>
|
|
|
|
|
|
</div> <!-- /.row -->
|
|
</div> <!-- /.container -->
|
|
|
|
|
|
|
|
<footer class="blog-footer">
|
|
<p dir="auto">
|
|
|
|
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
|
|
|
|
</p>
|
|
<p>
|
|
<a href="#">Back to top</a>
|
|
</p>
|
|
</footer>
|
|
|
|
|
|
</body>
|
|
|
|
</html>
|