---
title: "November, 2020"
date: 2020-11-01T13:11:54+02:00
author: "Alan Orth"
categories: ["Notes"]
---

## 2020-11-01

- Continue processing the statistics-2019 Solr core with the AtomicStatisticsUpdateCLI tool on DSpace Test
- So far we've spent at least fifty hours processing the statistics and statistics-2019 cores... wow.

## 2020-11-02

- Talk to Moayad and fix a few issues on OpenRXV:
  - Incorrect views and downloads (caused by Elasticsearch's default result set size of 10)
  - Invalid share link
  - Missing "https://" for Handles in the Simple Excel report (caused by using the `handle` instead of the `uri`)
  - Sorting the list of items by views
- I resumed processing the statistics-2018 Solr core after it had spent 20 hours getting to 60%

## 2020-11-04

- After 29 hours the statistics-2017 core finished processing, so I started the statistics-2016 core on DSpace Test

## 2020-11-05

- Peter sent me corrections and deletions for the author affiliations
- I quickly proofed them for UTF-8 issues in OpenRefine and csv-metadata-quality, then tested them locally and applied them on CGSpace:

```
$ ./fix-metadata-values.py -i 2020-11-05-fix-862-affiliations.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.affiliation -t 'correct' -m 211
$ ./delete-metadata-values.py -i 2020-11-05-delete-29-affiliations.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.affiliation -m 211
```

- Then I started a Discovery re-index on CGSpace:

```
$ time chrt -b 0 ionice -c2 -n7 nice -n19 dspace index-discovery -b

real    92m24.993s
user    8m11.858s
sys     2m26.931s
```

## 2020-11-06

- Restart the AtomicStatisticsUpdateCLI processing of the statistics-2016 core on DSpace Test after 20 hours...
- This phase finished after five hours, so I started it on the statistics-2015 core

## 2020-11-07

- Atmire responded about the issue with duplicate values in `owningComm`, `containerCommunity`, etc.
- I told them to please look into it and use some of our credits if need be
- The statistics-2015 core finished after 20 hours, so I started the statistics-2014 core

## 2020-11-08

- Add "Data Paper" to types on CGSpace
- Add "SCALING CLIMATE-SMART AGRICULTURE" to CCAFS subjects on CGSpace
- Add "ANDEAN ROOTS AND TUBERS" to CIP subjects on CGSpace
- Add CGIAR System subjects to Discovery sidebar facets on CGSpace
- Also add the System subject to item view on CGSpace
- The statistics-2014 core finished processing after five hours, so I started processing the statistics-2013 core on DSpace Test
- Since I was going to restart CGSpace and update the Discovery indexes anyway, I decided to check for any straggling upper case AGROVOC entries and lowercase them:

```
dspace=# BEGIN;
dspace=# UPDATE metadatavalue SET text_value=LOWER(text_value) WHERE resource_type_id=2 AND metadata_field_id=57 AND text_value ~ '[[:upper:]]';
UPDATE 164
dspace=# COMMIT;
```

- Run system updates on CGSpace (linode18) and reboot it
- I had to restart Tomcat once after the machine started up to get all Solr statistics cores to load properly
- After about ten more hours the rest of the Solr statistics cores finished processing on DSpace Test and I started optimizing them in the Solr Admin UI (see the command-line sketch below)
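- Optimizing can also be triggered from the command line via Solr's update handler, which would be handy for scripting it across all the yearly cores instead of clicking through the Admin UI; a rough sketch (assuming Solr is listening on localhost:8081, as in the shard URLs used elsewhere in these notes):

```
# loop over the yearly statistics cores and ask Solr to optimize each one
$ for year in {2010..2019}; do curl -s "http://localhost:8081/solr/statistics-$year/update?optimize=true"; done
```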
## 2020-11-10

- I am noticing that CGSpace doesn't have any statistics showing for years before 2020, but all cores are loaded successfully in the Solr Admin UI... strange
- I restarted Tomcat and I see in the Solr Admin UI that the statistics-2015 core failed to load
- Looking in the DSpace log I see:

```
2020-11-10 08:43:59,634 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2015
2020-11-10 08:43:59,687 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2018
2020-11-10 08:43:59,707 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2015
2020-11-10 08:44:00,004 WARN org.dspace.core.ConfigurationManager @ Requested configuration module: atmire-datatables not found
2020-11-10 08:44:00,005 WARN org.dspace.core.ConfigurationManager @ Requested configuration module: atmire-datatables not found
2020-11-10 08:44:00,005 WARN org.dspace.core.ConfigurationManager @ Requested configuration module: atmire-datatables not found
2020-11-10 08:44:00,325 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2015
```

- Seems that the core gets probed twice... perhaps a threading issue?
- The only thing I can think of is the `acceptorThreadCount` parameter in Tomcat's server.xml, which has been set to 2 since 2018-01 (we started sharding the Solr statistics cores in 2019-01 and that's when this problem arose)
- I will try reducing that to 1 (see the server.xml sketch at the end of this section)
- Wow, now it's even worse:

```
2020-11-10 08:51:03,007 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2018
2020-11-10 08:51:03,008 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2015
2020-11-10 08:51:03,137 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2018
2020-11-10 08:51:03,153 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2015
2020-11-10 08:51:03,289 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2015
2020-11-10 08:51:03,289 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2010
2020-11-10 08:51:03,475 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2010
2020-11-10 08:51:03,475 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2016
2020-11-10 08:51:03,730 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2016
2020-11-10 08:51:03,731 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2017
2020-11-10 08:51:03,992 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2017
2020-11-10 08:51:03,992 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2011
2020-11-10 08:51:04,178 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2011
2020-11-10 08:51:04,178 INFO org.dspace.statistics.SolrLogger @ Loading core with name: statistics-2012
```

- Could it be because we have two Tomcat connectors?
- I restarted Tomcat a few more times before all cores loaded, and still there are no stats before 2020-01... hmmmmm
- I added a [lowercase formatter to OpenRXV](https://github.com/ilri/OpenRXV/commit/3816b9b3f3d9182d2ba1a899c1017c5895a59dee) so that we can lowercase AGROVOC subjects during harvesting
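- For reference, `acceptorThreadCount` is an attribute on the HTTP `<Connector>` in Tomcat's server.xml; a minimal sketch of what such a connector looks like (the port, protocol, and other attribute values here are illustrative, not copied from our config):

```
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           acceptorThreadCount="1"
           maxThreads="200"
           connectionTimeout="20000" />
```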
## 2020-11-11

- Atmire responded with a quote for the work to fix the duplicate `owningComm`, etc. values in our Solr data
- I told them to proceed, as it's within our budget of credits
- They will write a processor for DSpace 6 to remove the duplicates
- I did some tests to add a usage statistics chart to the item views on DSpace Test
- It is inspired by Salem's work on WorldFish's repository, and it hits the dspace-statistics-api for the current item and displays a graph
- I got it working very easily for all-time statistics with Chart.js, but I think I will need to use Highcharts or something else because Chart.js is HTML5 canvas and doesn't allow theming via CSS (so our Bootstrap brand colors for each theme won't work)
- Hmm, Highcharts is not licensed under an open source license, so I will not use it
- Perhaps I'll use Chartist with the popover plugin...
- I think I'll pursue this after the DSpace 6 upgrade...

## 2020-11-12

- I was looking at Solr again, trying to find a way to get community and collection stats by faceting on `owningComm` and `owningColl`, and it actually seems to work
- The duplicated values in the multi-value fields don't seem to affect the counts as I had previously thought (though we should still get rid of them)
- One major difference between the raw numbers I was looking at and Atmire's numbers is that Atmire's code filters "Internal" IP addresses...
- Also, instead of doing `isBot:false` I think I should do `-isBot:true`, because it's not a given that all documents will have this field and have it false, but we can definitely exclude the ones that have it as true
- First we get the total number of communities with stats (using calcdistinct):

```
facet=true&facet.field=owningComm&facet.mincount=1&facet.limit=1&facet.offset=0&stats=true&stats.field=owningComm&stats.calcdistinct=true&shards=http://localhost:8081/solr/statistics,http://localhost:8081/solr/statistics-2019,http://localhost:8081/solr/statistics-2018,http://localhost:8081/solr/statistics-2017,http://localhost:8081/solr/statistics-2016,http://localhost:8081/solr/statistics-2015,http://localhost:8081/solr/statistics-2014,http://localhost:8081/solr/statistics-2013,http://localhost:8081/solr/statistics-2012,http://localhost:8081/solr/statistics-2011,http://localhost:8081/solr/statistics-2010
```

- Then we get the stats themselves, iterating 100 items at a time with limit and offset:

```
facet=true&facet.field=owningComm&facet.mincount=1&facet.limit=100&facet.offset=0&shards=http://localhost:8081/solr/statistics,http://localhost:8081/solr/statistics-2019,http://localhost:8081/solr/statistics-2018,http://localhost:8081/solr/statistics-2017,http://localhost:8081/solr/statistics-2016,http://localhost:8081/solr/statistics-2015,http://localhost:8081/solr/statistics-2014,http://localhost:8081/solr/statistics-2013,http://localhost:8081/solr/statistics-2012,http://localhost:8081/solr/statistics-2011,http://localhost:8081/solr/statistics-2010
```

- I was surprised to see 10,000,000 docs with `isBot:true` when I was testing on DSpace Test...
- This has got to be a mistake of some kind, as I see 4 million in 2014 that are from `dns:localhost.`, perhaps that's when we didn't have `useProxies` set up correctly?
- I don't see the same thing on CGSpace... I wonder what happened?
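- To double check those bot counts on either server I can just ask Solr for the document counts directly; a rough sketch against one of the yearly cores (assuming Solr on localhost:8081 as in the shard URLs above; `numFound` in each response is the count):

```
$ curl -s 'http://localhost:8081/solr/statistics-2014/select?q=isBot:true&rows=0&wt=json'
$ curl -s 'http://localhost:8081/solr/statistics-2014/select?q=*:*&fq=-isBot:true&rows=0&wt=json'
```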
- Perhaps they got re-tagged during the DSpace 6 upgrade, somehow during the Solr migration? Hmmmmm. Definitely have to be careful with `isBot:true` in the future and not automatically purge these!!!
- I noticed 120,000+ hits from monit, FeedBurner, and Blackboard Safeassign in 2014, 2015, 2016, 2017, etc...
- I hadn't seen monit before, but the others have been in DSpace's spider agent lists for some time, so they probably only appear in older stats cores
- The issue with purging these using `check-spider-hits.sh` is that it can't do case-insensitive regexes, and some metacharacters like `\s` don't work, so I added case-sensitive patterns to a local agents file and purged them with the script

## 2020-11-15

- Upgrade CGSpace to DSpace 6.3
- First build, update, and migrate the database:

```
$ dspace cleanup -v
$ git checkout origin/6_x-dev-atmire-modules
$ npm install -g yarn
$ chrt -b 0 mvn -U -Dmirage2.on=true -Dmirage2.deps.included=false -P \!dspace-lni,\!dspace-rdf,\!dspace-sword,\!dspace-swordv2,\!dspace-jspui clean package
$ sudo su - postgres
$ psql dspace -c 'CREATE EXTENSION pgcrypto;'
$ psql dspace -c "DELETE FROM schema_version WHERE version IN ('5.8.2015.12.03.3');"
$ exit
$ rm -rf /home/cgspace/config/spring
$ ant update
$ dspace database info
$ dspace database migrate
$ sudo systemctl start tomcat7
```

- After starting Tomcat, DSpace should start up OK and begin Discovery indexing, but I also want to upgrade from PostgreSQL 9.6 to 10
- I installed and configured PostgreSQL 10 using the Ansible playbooks, then migrated the database manually:

```
# systemctl stop tomcat7
# pg_ctlcluster 9.6 main stop
# tar -cvzpf var-lib-postgresql-9.6.tar.gz /var/lib/postgresql/9.6
# tar -cvzpf etc-postgresql-9.6.tar.gz /etc/postgresql/9.6
# pg_ctlcluster 10 main stop
# pg_dropcluster 10 main
# pg_upgradecluster 9.6 main
# pg_dropcluster 9.6 main
# systemctl start postgresql
# dpkg -l | grep postgresql | grep 9.6 | awk '{print $2}' | xargs dpkg -r
```

- Then I ran all system updates and rebooted the server...
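- A quick sanity check after the cluster upgrade, to confirm that only the PostgreSQL 10 cluster remains and that it is online (a rough sketch using the standard Debian/Ubuntu postgresql-common tools):

```
# pg_lsclusters
# sudo -u postgres psql -c 'SELECT version();'
```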
- After the server came back up I re-ran the Ansible playbook to make sure all configs and services were updated
- I disabled the dspace-statistics-api for now, because it won't work until I migrate all the Solr statistics anyway
- Start a full Discovery re-indexing:

```
$ time chrt -b 0 ionice -c2 -n7 nice -n19 dspace index-discovery -b

real    211m30.726s
user    134m40.124s
sys     2m17.979s
```

- Towards the end of the indexing there were a few dozen of these messages:

```
2020-11-15 13:23:21,685 INFO com.atmire.dspace.discovery.service.AtmireSolrService @ Removed Item: null from Index
```

- I updated all the Ansible infrastructure and DSpace branches to be the DSpace 6 ones
- I will wait until the Discovery indexing is finished to start doing the Solr statistics migration
- I tested the email functionality and it seems to need more configuration:

```
$ dspace test-email

About to send test email:
 - To: blah@cgiar.org
 - Subject: DSpace test email
 - Server: smtp.office365.com

Error sending email:
 - Error: com.sun.mail.smtp.SMTPSendFailedException: 451 5.7.3 STARTTLS is required to send mail [AM4PR0701CA0003.eurprd07.prod.outlook.com]
```

- I copied the `mail.extraproperties = mail.smtp.starttls.enable=true` setting from the old DSpace 5 `dspace.cfg` and now the emails are working
- After the Discovery indexing finished I started processing the Solr stats, one core and 2.5 million records at a time:

```
$ export JAVA_OPTS='-Dfile.encoding=UTF-8 -Xmx2048m'
$ chrt -b 0 dspace solr-upgrade-statistics-6x -n 2500000 -i statistics
```

- After about 6,000,000 records I got the same error that I've gotten every time I test this migration process:

```
Exception: Error while creating field 'p_group_id{type=uuid,properties=indexed,stored,multiValued}' from value '10'
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error while creating field 'p_group_id{type=uuid,properties=indexed,stored,multiValued}' from value '10'
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:552)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
        at org.dspace.util.SolrUpgradePre6xStatistics.batchUpdateStats(SolrUpgradePre6xStatistics.java:161)
        at org.dspace.util.SolrUpgradePre6xStatistics.run(SolrUpgradePre6xStatistics.java:456)
        at org.dspace.util.SolrUpgradePre6xStatistics.main(SolrUpgradePre6xStatistics.java:365)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
        at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
```

## 2020-11-16

- Users are having issues submitting items to CGSpace
- Looking at the data I see that connections skyrocketed since the DSpace 6 upgrade yesterday, and they are all in "waiting for lock" state:

![PostgreSQL connections week](/cgspace-notes/2020/11/postgres_connections_ALL-week.png)

![PostgreSQL locks week](/cgspace-notes/2020/11/postgres_locks_ALL-week.png)

- There are almost 1,500 locks:

```
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
1494
```
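- Counting rows only says how many locks there are; to see which queries are actually stuck waiting, something like this should work (a rough sketch joining `pg_locks` with `pg_stat_activity` and filtering on ungranted locks):

```
$ psql -c "SELECT psa.pid, psa.state, left(psa.query, 60) AS query FROM pg_locks pl JOIN pg_stat_activity psa ON pl.pid = psa.pid WHERE NOT pl.granted;"
```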
- I sent a mail to the dspace-tech mailing list to ask for help...
- For now I just restarted PostgreSQL and a few users were able to complete submissions...
- While processing the statistics-2018 Solr core I got the *same* memory error that I have gotten every time I have processed this core in testing:

```
Exception: Java heap space
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:3332)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
        at java.lang.StringBuffer.append(StringBuffer.java:270)
        at java.io.StringWriter.write(StringWriter.java:101)
        at org.apache.solr.common.util.XML.writeXML(XML.java:133)
        at org.apache.solr.client.solrj.util.ClientUtils.writeVal(SourceFile:160)
        at org.apache.solr.client.solrj.util.ClientUtils.writeXML(SourceFile:128)
        at org.apache.solr.client.solrj.request.UpdateRequest.writeXML(UpdateRequest.java:365)
        at org.apache.solr.client.solrj.request.UpdateRequest.getXML(UpdateRequest.java:281)
        at org.apache.solr.client.solrj.request.RequestWriter.getContentStream(RequestWriter.java:67)
        at org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream.getDelegate(RequestWriter.java:95)
        at org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream.getName(RequestWriter.java:105)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.createMethod(HttpSolrServer.java:302)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
        at org.dspace.util.SolrUpgradePre6xStatistics.batchUpdateStats(SolrUpgradePre6xStatistics.java:161)
        at org.dspace.util.SolrUpgradePre6xStatistics.run(SolrUpgradePre6xStatistics.java:456)
        at org.dspace.util.SolrUpgradePre6xStatistics.main(SolrUpgradePre6xStatistics.java:365)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
        at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
```

- I increased the Java heap memory to 4096MB and restarted the processing (see the sketch below)
- After a few hours I got the following error, which I have gotten several times over the last few months:

```
Exception: Error while creating field 'p_group_id{type=uuid,properties=indexed,stored,multiValued}' from value '10'
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error while creating field 'p_group_id{type=uuid,properties=indexed,stored,multiValued}' from value '10'
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:552)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
        at org.dspace.util.SolrUpgradePre6xStatistics.batchUpdateStats(SolrUpgradePre6xStatistics.java:161)
        at org.dspace.util.SolrUpgradePre6xStatistics.run(SolrUpgradePre6xStatistics.java:456)
        at org.dspace.util.SolrUpgradePre6xStatistics.main(SolrUpgradePre6xStatistics.java:365)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
        at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
```
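- For reference, bumping the heap just meant adjusting `JAVA_OPTS` before re-running the migration; a sketch based on the earlier invocation (I'm assuming the same batch size, and the statistics-2018 core that was being processed):

```
$ export JAVA_OPTS='-Dfile.encoding=UTF-8 -Xmx4096m'
$ chrt -b 0 dspace solr-upgrade-statistics-6x -n 2500000 -i statistics-2018
```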
## 2020-11-17

- Chat with Peter about using some remaining CRP Livestock open access money to fund more work on OpenRXV / AReS
- I will create GitHub issues for each of the things we talked about and then create ToRs to send to CodeObia for a quote
- Continue migrating Solr statistics to DSpace 6 UUID format after the upgrade on Sunday
- Regarding the IWMI issue about flagships and strategic priorities, we can use CRP Livestock as an example because all their [flagships are mapped to collections](https://cgspace.cgiar.org/handle/10568/80102)
- Database issues are worse today...

![PostgreSQL connections week](/cgspace-notes/2020/11/postgres_connections_ALL-week2.png)

![PostgreSQL locks week](/cgspace-notes/2020/11/postgres_locks_ALL-week2.png)

- There are over 2,000 locks:

```
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
2071
```
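- It would also help to see where those connections are coming from and what state they are in; a rough sketch of the kind of query I have in mind against `pg_stat_activity`:

```
$ psql -c "SELECT datname, application_name, state, count(*) AS n FROM pg_stat_activity GROUP BY datname, application_name, state ORDER BY n DESC;"
```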