<li>I copied the logic in the <code>jmx_tomcat_dbpools</code> provided by Ubuntu’s <code>munin-plugins-java</code> package and used the stuff I discovered about JMX <a href="/cgspace-notes/2018-01/">in 2018-01</a></li>
<li>Wow, I packaged up the <code>jmx_dspace_sessions</code> stuff in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> and deployed it on CGSpace and it totally works:</li>
<li>Bram from Atmire responded about the high load caused by the Solr updater script and said it will be fixed with the updates to DSpace 5.8 compatibility: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566</a></li>
<li>We will close that ticket for now and wait for the 5.8 stuff: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560</a></li>
<li>I finally took a look at the second round of cleanups Peter had sent me for author affiliations in mid January</li>
<pretabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'affiliation') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
<li>Oh, and it looks like we processed over 3.1 million requests in January, up from 2.9 million in <a href="/cgspace-notes/2017-12/">December</a>:</li>
<pretabindex="0"><code>dspace=# update metadatavalue set text_value=REGEXP_REPLACE(text_value, '\s+$' , '') where resource_type_id=2 and metadata_field_id=3 and text_value ~ '^.*?\s+$';
<pretabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors-2018-02-05.csv with csv;
<li>I’m going to re-schedule the <code>taskUpdateSolrStatsMetadata</code> task as <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">Bram detailed in ticket 566</a> to see if it makes CGSpace stop crashing every morning</li>
<li>Atmire has said that there will eventually be a fix for this high load caused by their script, but it will come with the DSpace 5.8 compatibility work they are already doing</li>
<li>I re-deployed CGSpace with the new task time of 3PM, ran all system updates, and restarted the server</li>
<li>Also, I changed the name of the DSpace fallback pool on DSpace Test and CGSpace to be called ‘dspaceCli’ so that I can distinguish it in <code>pg_stat_activity</code></li>
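<li>A quick way to see which pool is holding the connections is to count them per <code>application_name</code> in <code>pg_stat_activity</code> (a sketch; it assumes each pool passes its name to PostgreSQL as the <code>ApplicationName</code> connection property):</li>
<pre tabindex="0"><code>dspace=# select application_name, count(*) as count from pg_stat_activity group by application_name order by count desc;
</code></pre>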
<li>I implemented some changes to the pooling in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> so that each DSpace web application can use its own pool (web, api, and solr)</li>
<li>Each pool uses its own name and hopefully this should help me figure out which one is using too many connections next time CGSpace goes down</li>
<li>Also, this will mean that when a search bot comes along and hammers the XMLUI, the REST and OAI applications will be fine</li>
<li>I’m not actually sure if the Solr web application uses the database though, so I’ll have to check later and remove it if necessary</li>
<li>The ORCiD code in DSpace appears to be using <code>http://pub.orcid.org/</code>, but when I go there in the browser it redirects me to <code>https://pub.orcid.org/v2.0/</code></li>
<li>According to <a href="https://groups.google.com/forum/#!topic/orcid-api-users/qfg-HwAB1bk">the announcement</a> the v1 API was moved from <code>http://pub.orcid.org/</code> to <code>https://pub.orcid.org/v1.2</code> until March 1st when it will be discontinued for good</li>
<li>But the old URL is hard coded in DSpace and it doesn’t work anyways, because it currently redirects you to <code>https://pub.orcid.org/v2.0/v1.2</code></li>
<li>I took a few snapshots of the PostgreSQL activity around that time; the connections were very high at first but reduced on their own as the minutes went on:</li>
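<li>One way to take such a snapshot is to count connections by state (a sketch):</li>
<pre tabindex="0"><code>dspace=# select state, count(*) as count from pg_stat_activity group by state order by count desc;
</code></pre>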
<li>Since I was restarting Tomcat anyways, I decided to deploy the changes to create two different pools for web and API apps</li>
<li>Looking at the Munin graphs, I can see that there were almost double the normal number of DSpace sessions at the time of the crash (and also yesterday!):</li>
<li>CGSpace went down again a few hours later, and now the connections to the dspaceWeb pool are maxed at 250 (the new limit I imposed with the new separate pool scheme)</li>
<pretabindex="0"><code>org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-328] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
<li>I suspect these are issues with abandoned connections or maybe a leak, so I’m going to try adding the <code>removeAbandoned='true'</code> parameter which is apparently off by default</li>
<li>I will try <code>testOnReturn='true'</code> too, just to add more validation, because I’m fucking grasping at straws</li>
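<li>For reference, a minimal sketch of how those attributes might look on a Tomcat JDBC pool <code>&lt;Resource&gt;</code> definition (the values here are illustrative, not our exact settings):</li>
<pre tabindex="0"><code>&lt;!-- sketch: Tomcat JDBC pool with abandoned-connection cleanup and extra validation --&gt;
&lt;Resource name="jdbc/dspaceWeb" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="..."
          maxActive="250"
          removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
          testOnReturn="true" validationQuery="SELECT 1"
          jdbcInterceptors="ResetAbandonedTimer"/&gt;
</code></pre>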
<li>I’m trying to find a way to determine what was using all those Tomcat sessions, but parsing the DSpace log is hard because some IPs are IPv6, which contain colons!</li>
<li>Nice, so these are all known bots that are already crammed into one session by Tomcat’s Crawler Session Manager Valve.</li>
<li>What in the actual fuck, why is our load doing this? It’s gotta be something fucked up with the database pool being “busy” but everything is fucking idle</li>
<li>I notice there is an issue (that I’ve probably noticed before) on the Jira tracker about this that was fixed in DSpace 5.7: <a href="https://jira.duraspace.org/browse/DS-3551">https://jira.duraspace.org/browse/DS-3551</a></li>
<li>I seriously doubt this leaking shit is fixed for sure, but I’m gonna cherry-pick all those commits and try them on DSpace Test and probably even CGSpace because I’m fed up with this shit</li>
<li>I cherry-picked all the commits for DS-3551 but it won’t build on our current DSpace 5.5!</li>
<li>If I leave all other settings alone but change <code>choices.presentation</code> to <code>lookup</code>, the ORCID badge is there, but then item submission uses the LC Name Authority and breaks with this error:</li>
<li>Magdalena from CCAFS emailed to ask why one of their items has such a weird thumbnail: <a href="https://cgspace.cgiar.org/handle/10568/90735">10568/90735</a></li>
<li>The <code>isutf8</code> program comes from <code>moreutils</code></li>
<li>Line 100 contains: Galiè, Alessandra</li>
<li>In other news, the psycopg2 project is splitting its package on PyPI, so to install the binary wheel distribution you need to use <code>pip install psycopg2-binary</code></li>
<li>I updated my <code>fix-metadata-values.py</code> and <code>delete-metadata-values.py</code> scripts on the scripts page: <a href="https://github.com/ilri/DSpace/wiki/Scripts">https://github.com/ilri/DSpace/wiki/Scripts</a></li>
<li>I ran the 342 author corrections (after trimming whitespace and excluding those with <code>||</code> and other syntax errors) on CGSpace:</li>
<li>I see he actually has some variations with “Duncan, Alan J.”: <a href="https://cgspace.cgiar.org/discover?filtertype_1=author&filter_relational_operator_1=contains&filter_1=Duncan%2C+Alan&submit_apply_filter=&query=">https://cgspace.cgiar.org/discover?filtertype_1=author&filter_relational_operator_1=contains&filter_1=Duncan%2C+Alan&submit_apply_filter=&query=</a></li>
<li>I will just update those for her too and then restart the indexing:</li>
<pretabindex="0"><code>dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like '%Duncan, Alan%';
dspace=# update metadatavalue set text_value='Duncan, Alan', authority='a6486522-b08a-4f7a-84f9-3a73ce56034d', confidence=600 where resource_type_id=2 and metadata_field_id=3 and text_value like 'Duncan, Alan%';
UPDATE 216
dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like '%Duncan, Alan%';
<li>Run all system updates on DSpace Test (linode02) and reboot it</li>
<li>I wrote a Python script (<a href="https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b"><code>resolve-orcids-from-solr.py</code></a>) using SolrClient to parse the Solr authority cache for ORCID IDs</li>
<li>We currently have 1562 authority records with ORCID IDs, and 624 unique IDs</li>
<li>We can use this to build a controlled vocabulary of ORCID IDs for new item submissions</li>
<li>Follow up with Atmire on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 Compatibility ticket</a> to ask again if they want me to send them a DSpace 5.8 branch to work on</li>
<li>Abenet asked if there was a way to get the number of submissions she and Bizuwork did</li>
<li>I said that the Atmire Workflow Statistics module was supposed to be able to do that</li>
<li>I see that in <a href="/cgspace-notes/2017-04/">April, 2017</a> I just used a SQL query to get a user’s submissions by checking the <code>dc.description.provenance</code> field</li>
<pretabindex="0"><code>dspace=# select * from metadatavalue where resource_type_id=2 and metadata_field_id=28 and text_value ~ '^Submitted.*yabowork.*2017-12.*';
<li>We said we’d start with a controlled vocabulary for <code>cg.creator.id</code> on the DSpace Test submission form, where we store the author name and the ORCID in some format like: Alan S. Orth (0000-0002-1735-7458)</li>
<li>Eventually we need to find a way to print the author names with links to their ORCID profiles</li>
<li>Abenet will send an email to the partners to give us ORCID IDs for their authors and to stress that they update their name format on ORCID.org if they want it in a special way</li>
<li>I sent the Codeobia guys a question to ask how they prefer that we store the IDs, i.e. one of:
<li>Atmire responded on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 compatibility ticket</a> and said they will let me know if they want me to give them a clean 5.8 branch</li>
<li>It seems <code>tidy</code> fucks up accents, for example it turns <code>Adriana Tofiño (0000-0001-7115-7169)</code> into <code>Adriana TofiÃ±o (0000-0001-7115-7169)</code></li>
<li>There are some formatting issues with names in Peter’s list, so I should remember to re-generate the list of names from ORCID’s API once we’re done</li>
<li>Then the cleanup process will continue for awhile and hit another foreign key conflict, and eventually it will complete after you manually resolve them all</li>
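<li>The conflicts are typically a bundle still referencing the bitstream being deleted via <code>primary_bitstream_id</code>, so resolving one looks roughly like this (a sketch; 123456 is a placeholder for the ID from the error message):</li>
<pre tabindex="0"><code>dspace=# update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (123456);
</code></pre>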
<li>Altmetric seems to be indexing DSpace Test for some reason:
<ul>
<li>See this item on DSpace Test: <a href="https://dspacetest.cgiar.org/handle/10568/78450">https://dspacetest.cgiar.org/handle/10568/78450</a></li>
<li>See the corresponding page on Altmetric: <a href="https://www.altmetric.com/details/handle/10568/78450">https://www.altmetric.com/details/handle/10568/78450</a></li>
</ul>
</li>
<li>Peter pointed out that we had an incorrect sponsor in the controlled vocabulary: <code>U.S. Agency for International Development</code> → <code>United States Agency for International Development</code></li>
<pretabindex="0"><code>dspace=# update metadatavalue set text_value='United States Agency for International Development' where resource_type_id=2 and metadata_field_id=29 and text_value like '%U.S. Agency for International Development%';
<li>ICARDA’s Mohamed Salem pointed out that it would be easiest to format the <code>cg.creator.id</code> field like “Alan Orth: 0000-0002-1735-7458” because no name will have a “:” so it’s easier to split on</li>
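<li>As a quick illustration (not something we store in the database yet), that format splits cleanly even in plain SQL:</li>
<pre tabindex="0"><code>dspace=# select split_part('Alan Orth: 0000-0002-1735-7458', ': ', 1) as name, split_part('Alan Orth: 0000-0002-1735-7458', ': ', 2) as orcid;
</code></pre>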
<li>The one on the bottom left uses a similar format to our author display, and the one in the middle uses the format <a href="https://orcid.org/trademark-and-id-display-guidelines">recommended by ORCID’s branding guidelines</a></li>
<li>Also, I realized that the Academicons font icon set we’re using includes an ORCID badge so we don’t need to use the PNG image anymore</li>
<li>I updated my <code>resolve-orcids-from-solr.py</code> script to be able to resolve ORCID identifiers from a text file so I renamed it to <code>resolve-orcids.py</code></li>
<li>Also, I updated it so it uses several new options:</li>
<li>Discuss some of the issues with null values and poor-quality names in some ORCID identifiers with Abenet and I think we’ll now only use ORCID iDs that have been sent to us from partners, not those extracted via keyword searches on orcid.org</li>
<li>This should be the version we use (the existing controlled vocabulary generated from CGSpace’s Solr authority core plus the IDs sent to us so far by partners):</li>
<li>I updated the <code>resolve-orcids.py</code> to use the “credit-name” if it exists in a profile, falling back to “given-names” + “family-name”</li>
<li>Also, I added color-coded output to the debug messages and added a “quiet” mode that suppresses the normal behavior of printing results to the screen</li>
<li>Help debug issues with Altmetric badges again, it looks like Altmetric is all kinds of fucked up</li>
<li>Last week I pointed out that they were tracking Handles from our test server</li>
<li>Now, their API is responding with content that is marked as content-type JSON but is not valid JSON</li>
<li>For example, this item: <a href="https://cgspace.cgiar.org/handle/10568/83320">https://cgspace.cgiar.org/handle/10568/83320</a></li>
<li>The Altmetric JavaScript builds the following API call: <a href="https://api.altmetric.com/v1/handle/10568/83320?callback=_altmetric.embed_callback&domain=cgspace.cgiar.org&key=3c130976ca2b8f2e88f8377633751ba1&cache_until=13-20">https://api.altmetric.com/v1/handle/10568/83320?callback=_altmetric.embed_callback&domain=cgspace.cgiar.org&key=3c130976ca2b8f2e88f8377633751ba1&cache_until=13-20</a></li>
<li>The response body is <em>not</em> JSON</li>
<li>To contrast, the following bare API call without query parameters is valid JSON: <a href="https://api.altmetric.com/v1/handle/10568/83320">https://api.altmetric.com/v1/handle/10568/83320</a></li>
<li>It seems to re-use its user agent but makes tons of useless requests and I wonder if I should add <code>.*spider.*</code> to the Tomcat Crawler Session Manager Valve?</li>
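<li>If I do, the change would be something like this on the Valve in Tomcat’s <code>server.xml</code> (a sketch; the first part of the pattern is meant to mirror the valve’s default user agent regex):</li>
<pre tabindex="0"><code>&lt;!-- sketch: fold requests whose user agent matches ".*spider.*" into a single session --&gt;
&lt;Valve className="org.apache.catalina.valves.CrawlerSessionManagerValve"
       crawlerUserAgents=".*[bB]ot.*|.*Yahoo! Slurp.*|.*Feedfetcher-Google.*|.*spider.*" /&gt;
</code></pre>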
<li>I will tell her that we should proceed on sharing our work on DSpace Test with the partners this week anyways and we can update the list later</li>
<li>While regenerating the names for these ORCID identifiers I saw <a href="https://pub.orcid.org/v2.1/0000-0002-2614-426X/person">one that has a weird value for its names</a>:</li>
<li>I will remove that one from our list for now</li>
<li>Remove Dryland Systems subject from submission form because that CRP closed two years ago (<a href="https://github.com/ilri/DSpace/pull/355">#355</a>)</li>
<li>Run all system updates on DSpace Test</li>
<li>Email ICT to ask how to proceed with the OCS proforma issue for the new DSpace Test server on Linode</li>
<li>Thinking about how to preserve ORCID identifiers attached to existing items in CGSpace</li>
<li>We have over 60,000 unique author + authority combinations on CGSpace:</li>
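<li>A count like that can be double-checked with a query along these lines (a sketch, using the author field like the other queries in these notes):</li>
<pre tabindex="0"><code>dspace=# select count(distinct (text_value, authority)) as count from metadatavalue where resource_type_id=2 and metadata_field_id=3;
</code></pre>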
<li>I know from earlier this month that there are only 624 unique ORCID identifiers in the Solr authority core, so it’s way easier to just fetch the unique ORCID iDs from Solr and then go back to PostgreSQL and do the metadata mapping that way</li>
<li>The query in Solr would simply be <code>orcid_id:*</code></li>
<li>Assuming I know that authority record with <code>id:d7ef744b-bbd4-4171-b449-00e37e1b776f</code>, then I could query PostgreSQL for all metadata records using that authority:</li>
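<li>For example, something along these lines (a sketch, restricted to the author field):</li>
<pre tabindex="0"><code>dspace=# select * from metadatavalue where resource_type_id=2 and metadata_field_id=3 and authority='d7ef744b-bbd4-4171-b449-00e37e1b776f';
</code></pre>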
<li>Peter is having problems with “Socket closed” on his submissions page again</li>
<li>He says his personal account loads much faster than his CGIAR account, which could be because the CGIAR account has potentially thousands of submissions over the last few years</li>
<li>I have disabled <code>removeAbandoned</code> for now because that’s the only thing I changed in the last few weeks since he started having issues</li>
<li>I think the real line of logic to follow here is why the submissions page is so slow for him (presumably because of loading all his submissions?)</li>
<li>I need to see which SQL queries are run during that time</li>
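<li>A first pass at that is to look at the currently running statements and how long they have been going (a sketch):</li>
<pre tabindex="0"><code>dspace=# select pid, now() - query_start as duration, usename, query from pg_stat_activity where state != 'idle' order by duration desc;
</code></pre>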
<li>And only a few hours after I disabled the <code>removeAbandoned</code> thing CGSpace went down and lo and behold, there were 264 connections, most of which were idle:</li>
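<li>To see how long those idle connections have been sitting there, something like this works (a sketch):</li>
<pre tabindex="0"><code>dspace=# select pid, application_name, state, now() - state_change as idle_for from pg_stat_activity where state = 'idle' order by idle_for desc limit 10;
</code></pre>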
<li>… but according to the <a href="https://www.postgresql.org/docs/9.5/static/view-pg-locks.html">pg_locks documentation</a> I should have done this to correlate the locks with the activity:</li>
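<li>That is, a join along the lines of the example in the documentation:</li>
<pre tabindex="0"><code>dspace=# select * from pg_locks pl left join pg_stat_activity psa on pl.pid = psa.pid;
</code></pre>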
<li>jdbcInterceptors=‘ResetAbandonedTimer’: Make sure the “abondoned” timer is reset every time there is activity on a connection</li>
<li>So maybe it was due to the editor uploading files, perhaps something that was too big?</li>
<li>I think I’ll increase the JVM heap size on CGSpace from 6144m to 8192m because I’m sick of this random crashing shit and the server has memory and I’d rather eliminate this so I can get back to solving PostgreSQL issues and doing other real work</li>
<pretabindex="0"><code>cgspace=# update metadatavalue set text_value='United States Agency for International Development' where resource_type_id=2 and metadata_field_id=29 and text_value like '%U.S. Agency for International Development%';
<li>I took a few snapshots during the process and noticed 500, 800, and even 2000 locks at certain times during the process</li>
<li>Afterwards I looked a few times and saw only 150 or 200 locks</li>
<li>On the test server, with the <a href="https://jira.duraspace.org/browse/DS-3636">PostgreSQL indexes from DS-3636</a> applied, it finished instantly</li>
<li>Run system updates on DSpace Test and reboot the server</li>