<li>Atmire responded about the issue with duplicate data in our Solr statistics
<ul>
<li>They noticed that some records in the statistics-2015 core haven’t been migrated with the AtomicStatisticsUpdateCLI tool yet, and assumed that I hadn’t migrated any of the records</li>
<li>That’s strange, as I checked all ten cores and 2015 is the only one with some unmigrated documents, according to the <code>cua_version</code> field</li>
<li>I started processing those (about 411,000 records):</li>
</ul>
</li>
</ul>
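<p>The migration runs via the DSpace launcher; a minimal sketch, assuming the tool’s <code>-t</code> thread-count and <code>-c</code> core options (only the class name is certain here, it is the one that appears in the stack traces below):</p>
<pre><code>$ dspace dsrun com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdateCLI -t 12 -c statistics-2015
</code></pre>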
<ul>
<li>AReS went down when the <code>renew-letsencrypt</code> service stopped the <code>angular_nginx</code> container in the pre-update hook and failed to bring it back up
<ul>
<li>I ran all system updates on the host and rebooted it and AReS came back up OK</li>
<li>Udana emailed me yesterday to ask why the CGSpace usage statistics were showing “No Data”
<ul>
<li>I noticed a message in the Solr Admin UI that one of the statistics cores failed to load, but it is up and I can query it…</li>
<li>Nevertheless, I restarted Tomcat a few times to see if all cores would come up without an error message, but had no success (despite that all cores ARE up and I can query them, <em>sigh</em>)</li>
<li>I think I will move all the Solr yearly statistics back into the main statistics core</li>
</ul>
</li>
<li>Start testing export/import of yearly Solr statistics data into the main statistics core on DSpace Test, for example:</li>
</ul>
</li>
</ul>
<pre><code>$ ./run.sh -s http://localhost:8081/solr/statistics-2010 -a export -o statistics-2010.json -k uid
$ ./run.sh -s http://localhost:8081/solr/statistics -a import -o statistics-2010.json -k uid
</code></pre>
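<p>To sanity-check an import it is enough to compare document counts in the source and destination cores before and after; for example (the count is the <code>numFound</code> value in the response):</p>
<pre><code>$ curl -s 'http://localhost:8081/solr/statistics-2010/select?q=*:*&amp;rows=0&amp;wt=json'
$ curl -s 'http://localhost:8081/solr/statistics/select?q=*:*&amp;rows=0&amp;wt=json'
</code></pre>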
<ul>
<li>I deployed Tomcat 7.0.107 on DSpace Test (CGSpace is still Tomcat 7.0.104)</li>
<li>I finished migrating all the statistics from the yearly shards back to the main core</li>
</ul>
<h2 id="2020-12-05">2020-12-05</h2>
<ul>
<li>I deleted all the yearly statistics shards and restarted Tomcat on DSpace Test (linode26); one way to remove a core is sketched below</li>
</ul>
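<p>A yearly core can be unloaded (and its index deleted from disk) via Solr’s CoreAdmin API; a rough sketch, with the core name just as an example:</p>
<pre><code>$ curl -s 'http://localhost:8081/solr/admin/cores?action=UNLOAD&amp;core=statistics-2010&amp;deleteIndex=true&amp;deleteInstanceDir=true'
</code></pre>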
<h2 id="2020-12-06">2020-12-06</h2>
<ul>
<li>Looking into the statistics on DSpace Test after I migrated them back to the main core
<ul>
<li>All stats are working as expected… indexing time for the DSpace Statistics API is the same… and I don’t even see a difference in the JVM or memory stats in Munin other than a minor jump last week when I was processing them</li>
</ul>
</li>
<li>I will migrate them on CGSpace too, I think
<ul>
<li>First I will start with the statistics-2010 and statistics-2015 cores because they were the ones that were failing to load recently (despite actually being available in Solr, WTF); see the quick check below the screenshot</li>
</ul>
</li>
</ul>
<p><img src="/cgspace-notes/2020/12/solr-statistics-2010-failed.png" alt="Error message in Solr admin UI about the statistics-2010 core failing to load"></p>
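<p>A quick way to confirm that a core is actually loaded and queryable despite the error in the Admin UI (assuming the same local Tomcat port as the examples above):</p>
<pre><code>$ curl -s 'http://localhost:8081/solr/admin/cores?action=STATUS&amp;core=statistics-2010&amp;wt=json'
$ curl -s 'http://localhost:8081/solr/statistics-2010/select?q=*:*&amp;rows=0&amp;wt=json'
</code></pre>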
<ul>
<li>I will migrate all these cores and see if it makes a difference, then probably end up migrating all of them
<ul>
<li>I removed the statistics-2010, statistics-2015, statistics-2016, and statistics-2018 cores and restarted Tomcat and <em>all the statistics cores came up OK and the CUA statistics are OK</em>!</li>
<li>Run <code>dspace cleanup -v</code> on CGSpace to clean up deleted bitstreams</li>
<li>Atmire sent a <a href="https://github.com/ilri/DSpace/pull/457">pull request</a> to address the duplicate owningComm and owningColl values
<ul>
<li>Built and deployed it on DSpace Test but I am not sure how to run it yet</li>
<li>I sent feedback to Atmire on their tracker: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=839">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=839</a></li>
</ul>
</li>
<li>Abenet and Tezira are having issues with committing to the archive in their workflow
<ul>
<li>I looked at the server and indeed the locks and transactions are back up:</li>
<li>There are apparently 1,700 locks right now:</li>
</ul>
</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
1739
</code></pre>
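<p>A quick way to see which application is actually holding the locks is to group on the standard <code>pg_stat_activity</code> columns, for example:</p>
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT psa.application_name, psa.state, COUNT(*) FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid GROUP BY psa.application_name, psa.state ORDER BY COUNT(*) DESC;'
</code></pre>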
<h2 id="2020-12-08">2020-12-08</h2>
<ul>
<li>Atmire sent some instructions for using the DeduplicateValuesProcessor
<ul>
<li>I modified <code>atmire-cua-update.xml</code> as they instructed, but I get a million errors like this when I run AtomicStatisticsUpdateCLI with that configuration:</li>
</ul>
</li>
</ul>
<pre><code>Record uid: 64387815-d9a7-4605-8024-1c0a5c7520e0 couldn't be processed
com.atmire.statistics.util.update.atomic.ProcessingException: something went wrong while processing record uid: 64387815-d9a7-4605-8024-1c0a5c7520e0, an error occured in the com.atmire.statistics.util.update.atomic.processor.DeduplicateValuesProcessor
at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.applyProcessors(SourceFile:304)
at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.processRecords(SourceFile:176)
at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.performRun(SourceFile:161)
at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdater.update(SourceFile:128)
at com.atmire.statistics.util.update.atomic.AtomicStatisticsUpdateCLI.main(SourceFile:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
</code></pre>
<ul>
<li>They responded with an updated CUA (6.x-4.1.10-ilri-RC7) that has a fix for the duplicates processor <em>and</em> a possible fix for the database locking issues (a bug in CUASolrLoggerServiceImpl that causes an infinite loop and a Tomcat timeout)</li>
<li>I deployed the changes on DSpace Test and CGSpace, hopefully it will fix both issues!</li>
<li>In other news, after I restarted Tomcat on CGSpace the statistics-2013 core didn’t come back up properly, so I exported it and imported it into the main statistics core like I did for the others a few days ago</li>
<li>Sync DSpace Test with CGSpace’s Solr, PostgreSQL database, and assetstore… (rough sketch below)</li>
</ul>
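<p>The sync is the usual routine of dumping the PostgreSQL database, copying the Solr data and assetstore across, and restoring the dump on the test server; a minimal sketch, with placeholder hostnames, paths, and database names:</p>
<pre><code>$ pg_dump -Fc -f /tmp/cgspace.backup cgspace_db
$ scp /tmp/cgspace.backup dspacetest:/tmp/
$ rsync -av /path/to/assetstore/ dspacetest:/path/to/assetstore/
$ rsync -av /path/to/solr/statistics/ dspacetest:/path/to/solr/statistics/
# then on DSpace Test:
$ pg_restore -d dspacetest_db --clean --no-owner /tmp/cgspace.backup
</code></pre>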
<h2id="2020-12-09">2020-12-09</h2>
<ul>
<li>I was running the AtomicStatisticsUpdateCLI to remove duplicates on DSpace Test, but it failed near the end of its run on the statistics core (after 20 hours or so) with a memory error:</li>
</ul>
<pre><code>Successfully finished updating Solr Storage Reports | Wed Dec 09 15:25:11 CET 2020