<li>Peter gave feedback on the <code>dc.rights</code> proof of concept that I had sent him last week</li>
<li>We don’t need to distinguish between internal and external works, so that makes it just a simple list</li>
<li>Yesterday I figured out how to monitor DSpace sessions using JMX</li>
<li>I copied the logic in the <code>jmx_tomcat_dbpools</code> plugin provided by Ubuntu’s <code>munin-plugins-java</code> package and used what I discovered about JMX <a href="/cgspace-notes/2018-01/">in 2018-01</a></li>
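<li><p>For reference, exposing Tomcat’s MBeans over JMX only needs a few system properties in the service environment; the port and security settings below are illustrative, not what is actually deployed:</p>

```
# e.g. in /etc/default/tomcat7 (sketch — this disables JMX auth/SSL,
# so keep the port bound to localhost or firewalled)
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=5400 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```
</li>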
<li><p>Wow, I packaged up the <code>jmx_dspace_sessions</code> stuff in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> and deployed it on CGSpace and it totally works:</p>
<li>Bram from Atmire responded about the high load caused by the Solr updater script and said it will be fixed with the updates to DSpace 5.8 compatibility: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566</a></li>
<li>We will close that ticket for now and wait for the 5.8 stuff: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560</a></li>
<li>I finally took a look at the second round of cleanups Peter had sent me for author affiliations in mid January</li>
<pre><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'affiliation') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
<li><p>Oh, and it looks like we processed over 3.1 million requests in January, up from 2.9 million in <a href="/cgspace-notes/2017-12/">December</a>:</p>
<pre><code>dspace=# update metadatavalue set text_value=REGEXP_REPLACE(text_value, '\s+$' , '') where resource_type_id=2 and metadata_field_id=3 and text_value ~ '^.*?\s+$';
<pre><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors-2018-02-05.csv with csv;
<li><p>I did notice in <code>/var/log/tomcat7/catalina.out</code> that Atmire’s update thing was running though</p></li>
<li><p>So I restarted Tomcat and now everything is fine</p></li>
<li><p>Next time I see that many database connections I need to save the output so I can analyze it later</p></li>
<li><p>I’m going to re-schedule the <code>taskUpdateSolrStatsMetadata</code> task as <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">Bram detailed in ticket 566</a> to see if it makes CGSpace stop crashing every morning</p></li>
<li><p>If I move the task from 3AM to 3PM, ideally CGSpace will stop crashing in the morning, or start crashing ~12 hours later</p></li>
<li><p>Atmire said that there will eventually be a fix for the high load caused by their script, but it will come with the DSpace 5.8 compatibility work they are already doing</p></li>
<li><p>I re-deployed CGSpace with the new task time of 3PM, ran all system updates, and restarted the server</p></li>
<li><p>Also, I changed the name of the DSpace fallback pool on DSpace Test and CGSpace to be called ‘dspaceCli’ so that I can distinguish it in <code>pg_stat_activity</code></p></li>
<li><p>I implemented some changes to the pooling in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> so that each DSpace web application can use its own pool (web, api, and solr)</p></li>
<li><p>Each pool uses its own name and hopefully this should help me figure out which one is using too many connections next time CGSpace goes down</p></li>
<li><p>Also, this will mean that when a search bot comes along and hammers the XMLUI, the REST and OAI applications will be fine</p></li>
<li><p>I’m not actually sure if the Solr web application uses the database though, so I’ll have to check later and remove it if necessary</p></li>
<li><p>I deployed the changes on DSpace Test only for now, so I will monitor and make them on CGSpace later this week</p></li>
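<li><p>As a sketch, the per-application pools are just separately named JNDI <code>Resource</code> definitions in Tomcat’s <code>server.xml</code>; the URLs, credentials, and sizes here are placeholders, not our actual values:</p>

```
<Resource name="jdbc/dspaceWeb" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="CHANGEME" maxActive="250"/>
<Resource name="jdbc/dspaceApi" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="CHANGEME" maxActive="50"/>
```

<p>With distinct pool names it should be obvious in <code>pg_stat_activity</code> which application is holding the connections next time something goes wrong.</p></li>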
<li>Abenet wrote to ask a question about the ORCiD lookup not working for one CIAT user on CGSpace</li>
<li>I tried on DSpace Test and indeed the lookup just doesn’t work!</li>
<li>The ORCiD code in DSpace appears to be using <code>http://pub.orcid.org/</code>, but when I go there in the browser it redirects me to <code>https://pub.orcid.org/v2.0/</code></li>
<li>According to <a href="https://groups.google.com/forum/#!topic/orcid-api-users/qfg-HwAB1bk">the announcement</a> the v1 API was moved from <code>http://pub.orcid.org/</code> to <code>https://pub.orcid.org/v1.2</code> until March 1st when it will be discontinued for good</li>
<li>But the old URL is hard coded in DSpace and it doesn’t work anyways, because it currently redirects you to <code>https://pub.orcid.org/v2.0/v1.2</code></li>
<li>So I guess we have to disable that shit once and for all and switch to a controlled vocabulary</li>
<li>CGSpace crashed again, this time around <code>Wed Feb 7 11:20:28 UTC 2018</code></li>
<li><p>I took a few snapshots of the PostgreSQL activity at the time; the connections were very high at first but reduced on their own as the minutes went on:</p>
<li><p>Since I was restarting Tomcat anyways, I decided to deploy the changes to create two different pools for web and API apps</p></li>
<li><p>Looking at the Munin graphs, I can see that there were almost double the normal number of DSpace sessions at the time of the crash (and also yesterday!):</p></li>
<li><p>CGSpace went down again a few hours later, and now the connections to the dspaceWeb pool are maxed at 250 (the new limit I imposed with the new separate pool scheme)</p></li>
<li><p>What’s interesting is that the DSpace log says the connections are all busy:</p>
<pre><code>org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-328] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
<li><p>What the fuck, does DSpace think all connections are busy?</p></li>
<li><p>I suspect these are issues with abandoned connections or maybe a leak, so I’m going to try adding the <code>removeAbandoned='true'</code> parameter which is apparently off by default</p></li>
<li><p>I will try <code>testOnReturn='true'</code> too, just to add more validation, because I’m fucking grasping at straws</p></li>
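<li><p>For the record, the relevant tomcat-jdbc pool attributes look roughly like this (a sketch; the abandoned timeout should be longer than your slowest legitimate query or it will reap healthy connections):</p>

```
removeAbandoned="true"        removes connections the app never returned
removeAbandonedTimeout="60"   seconds before a busy connection counts as abandoned
logAbandoned="true"           logs the stack trace that borrowed the connection
testOnReturn="true"           validates connections as they come back to the pool
validationQuery="SELECT 1"
```
</li>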
<li><p>Also, WTF, there was a heap space error randomly in catalina.out:</p>
<li><p>I’m trying to find a way to determine what was using all those Tomcat sessions, but parsing the DSpace log is hard because some IPs are IPv6, which contain colons!</p></li>
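<li><p>Anchoring on the field names instead of splitting the whole line on “:” sidesteps the IPv6 problem; a minimal sketch, with sample lines that only approximate DSpace’s log format from memory:</p>

```python
import re

# DSpace log lines embed fields like ":session_id=...:ip_addr=...", so a
# naive split(':') falls apart when ip_addr is an IPv6 address. Anchoring
# regexes on the field names avoids the colon problem entirely.
SESSION_RE = re.compile(r'session_id=([0-9A-F]{32})')
# The address must end on a hex digit so we don't swallow a trailing ':'
IP_RE = re.compile(r'ip_addr=([0-9a-fA-F.:]*[0-9a-fA-F])')

def parse_fields(line):
    session = SESSION_RE.search(line)
    ip = IP_RE.search(line)
    return (session.group(1) if session else None,
            ip.group(1) if ip else None)

# Sample lines approximating the DSpace log format (illustrative only)
line_v4 = ('2018-02-07 11:20:28,678 INFO  org.dspace.usage.LoggerUsageEventListener '
           '@ anonymous:session_id=8100883DAD00666A655AE8EC571C95AE'
           ':ip_addr=207.46.13.39:view_item:handle=10568/77287')
line_v6 = ('2018-02-07 11:21:05,013 INFO  org.dspace.usage.LoggerUsageEventListener '
           '@ anonymous:session_id=1E9834E918A550C5CD480076BC1B73A4'
           ':ip_addr=2001:db8::1:view_item:handle=10568/83320')
```

<p>The same two regexes work whether the client address is IPv4 or IPv6.</p>
</li>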
<li><p>Nice, so these are all known bots that are already crammed into one session by Tomcat’s Crawler Session Manager Valve.</p></li>
<li><p>What in the actual fuck, why is our load doing this? It’s gotta be something fucked up with the database pool being “busy” but everything is fucking idle</p></li>
<li><p>One that I should probably add in nginx is 54.83.138.123, which is apparently the following user agent:</p>
<li><p>So is this just some fucked up XMLUI database leaking?</p></li>
<li><p>I notice there is an issue (that I’ve probably noticed before) on the Jira tracker about this that was fixed in DSpace 5.7: <a href="https://jira.duraspace.org/browse/DS-3551">https://jira.duraspace.org/browse/DS-3551</a></p></li>
<li><p>I seriously doubt this leaking shit is fixed for sure, but I’m gonna cherry-pick all those commits and try them on DSpace Test and probably even CGSpace because I’m fed up with this shit</p></li>
<li><p>I cherry-picked all the commits for DS-3551 but it won’t build on our current DSpace 5.5!</p></li>
<li><p>I sent a message to the dspace-tech mailing list asking why DSpace thinks these connections are busy when PostgreSQL says they are idle</p></li>
<li><p>Leave all settings but change choices.presentation to lookup and ORCID badge is there and item submission uses LC Name Authority and it breaks with this error:</p>
<li>Magdalena from CCAFS emailed to ask why one of their items has such a weird thumbnail: <a href="https://cgspace.cgiar.org/handle/10568/90735">10568/90735</a></li>
<li><p>In other news, psycopg2 is splitting their package in pip, so to install the binary wheel distribution you need to use <code>pip install psycopg2-binary</code></p></li>
<li><p>I updated my <code>fix-metadata-values.py</code> and <code>delete-metadata-values.py</code> scripts on the scripts page: <a href="https://github.com/ilri/DSpace/wiki/Scripts">https://github.com/ilri/DSpace/wiki/Scripts</a></p></li>
<li><p>I ran the 342 author corrections (after trimming whitespace and excluding those with <code>||</code> and other syntax errors) on CGSpace:</p>
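<p>That exclusion step can be scripted; a sketch (the column names here are hypothetical, not the actual layout of Peter’s file):</p>

```python
import csv
import io

def clean_corrections(csv_text):
    """Trim whitespace and drop rows whose replacement value contains the
    multi-value separator '||' (those need manual handling)."""
    keep = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        value = row['correct'].strip()
        if not value or '||' in value:
            continue  # syntax error or multi-value: handle by hand
        row['correct'] = value
        keep.append(row)
    return keep

# Hypothetical two-column corrections file
sample = ('incorrect,correct\n'
          '"Orth, A.","Orth, Alan "\n'
          '"Duncan, A","Duncan, Alan||Duncan, A."\n')
cleaned = clean_corrections(sample)
```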
<li><p>That reminds me that Bizu had asked me to fix some of Alan Duncan’s names in December</p></li>
<li><p>I see he actually has some variations with “Duncan, Alan J.”: <a href="https://cgspace.cgiar.org/discover?filtertype_1=author&filter_relational_operator_1=contains&filter_1=Duncan%2C+Alan&submit_apply_filter=&query=">https://cgspace.cgiar.org/discover?filtertype_1=author&filter_relational_operator_1=contains&filter_1=Duncan%2C+Alan&submit_apply_filter=&query=</a></p></li>
<li><p>I will just update those for her too and then restart the indexing:</p>
<pre><code>dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like '%Duncan, Alan%';
dspace=# update metadatavalue set text_value='Duncan, Alan', authority='a6486522-b08a-4f7a-84f9-3a73ce56034d', confidence=600 where resource_type_id=2 and metadata_field_id=3 and text_value like 'Duncan, Alan%';
UPDATE 216
dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like '%Duncan, Alan%';
<li><p>Run all system updates on DSpace Test (linode02) and reboot it</p></li>
<li><p>I wrote a Python script (<a href="https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b"><code>resolve-orcids-from-solr.py</code></a>) using SolrClient to parse the Solr authority cache for ORCID IDs</p></li>
<li><p>We currently have 1562 authority records with ORCID IDs, and 624 unique IDs</p></li>
<li><p>We can use this to build a controlled vocabulary of ORCID IDs for new item submissions</p></li>
<li><p>I don’t know how to add ORCID IDs to existing items yet… some more querying of PostgreSQL for authority values perhaps?</p></li>
<li><p>I added the script to the <a href="https://github.com/ilri/DSpace/wiki/Scripts">ILRI DSpace wiki on GitHub</a></p></li>
<li>Follow up with Atmire on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 Compatibility ticket</a> to ask again if they want me to send them a DSpace 5.8 branch to work on</li>
<li>Abenet asked if there was a way to get the number of submissions she and Bizuwork did</li>
<li>I said that the Atmire Workflow Statistics module was supposed to be able to do that</li>
<li>We had tried it in <a href="/cgspace-notes/2017-06/">June, 2017</a> and found that it didn’t work</li>
<li>Atmire sent us some fixes but they didn’t work either</li>
<li>I just tried the branch with the fixes again and it indeed does not work:</li>
</ul>
<p><img src="/cgspace-notes/2018/02/atmire-workflow-statistics.png" alt="Atmire Workflow Statistics No Data Available"/></p>
<ul>
<li>I see that in <a href="/cgspace-notes/2017-04/">April, 2017</a> I just used a SQL query to get a user’s submissions by checking the <code>dc.description.provenance</code> field</li>
<li><p>I apparently added that on 2018-02-07 so it could be, as I don’t see any of those socket closed errors in 2018-01’s logs!</p></li>
<li><p>I will increase the removeAbandonedTimeout from its default of 60 to 90 and enable logAbandoned</p></li>
<li><p>Peter hit this issue one more time, and this is apparently what Tomcat’s catalina.out log says when an abandoned connection is removed:</p>
<li>Skype with Peter and the Addis team to discuss what we need to do for the ORCIDs in the immediate future</li>
<li>We said we’d start with a controlled vocabulary for <code>cg.creator.id</code> on the DSpace Test submission form, where we store the author name and the ORCID in some format like: Alan S. Orth (0000-0002-1735-7458)</li>
<li>Eventually we need to find a way to print the author names with links to their ORCID profiles</li>
<li>Abenet will send an email to the partners to give us ORCID IDs for their authors and to stress that they update their name format on ORCID.org if they want it in a special way</li>
<li>I sent the Codeobia guys a question to ask how they prefer that we store the IDs, i.e. one of:
<ul>
<li>Alan Orth - 0000-0002-1735-7458</li>
<li>Alan Orth: 0000-0002-1735-7458</li>
<li>Alan S. Orth (0000-0002-1735-7458)</li>
</ul></li>
<li>Atmire responded on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 compatibility ticket</a> and said they will let me know if they want me to give them a clean 5.8 branch</li>
<li><p>It seems that tidy fucks up accents, mangling names like <code>Adriana Tofiño (0000-0001-7115-7169)</code> by re-encoding the accented characters</p></li>
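<li><p>Assuming the damage is the classic round trip of UTF-8 bytes being mis-decoded as Latin-1 (an assumption worth verifying against the original bytes before fixing anything in bulk), it can be reversed programmatically:</p>

```python
# Sketch: reverse UTF-8-decoded-as-Latin-1 mojibake ("Tofiño" -> "TofiÃ±o").
# Strings that don't survive the round trip are returned unchanged.
def fix_mojibake(s):
    try:
        return s.encode('latin-1').decode('utf-8')
    except (UnicodeEncodeError, UnicodeDecodeError):
        return s  # already clean, or a different failure mode

repaired = fix_mojibake('Adriana TofiÃ±o (0000-0001-7115-7169)')
```
</li>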
<li><p>There are some formatting issues with names in Peter’s list, so I should remember to re-generate the list of names from ORCID’s API once we’re done</p></li>
<li><p>Then the cleanup process will continue for a while, hit another foreign key conflict, and eventually complete after you manually resolve them all</p></li>
<li>Altmetric seems to be indexing DSpace Test for some reason:
<ul>
<li>See this item on DSpace Test: <a href="https://dspacetest.cgiar.org/handle/10568/78450">https://dspacetest.cgiar.org/handle/10568/78450</a></li>
<li>See the corresponding page on Altmetric: <a href="https://www.altmetric.com/details/handle/10568/78450">https://www.altmetric.com/details/handle/10568/78450</a></li>
</ul></li>
<li>And this item doesn’t even exist on CGSpace!</li>
<li>Start working on XMLUI item display code for ORCIDs</li>
<li>Send emails to Macaroni Bros and Usman at CIFOR about ORCID metadata</li>
<li>Peter pointed out that we had an incorrect sponsor in the controlled vocabulary: <code>U.S. Agency for International Development</code> → <code>United States Agency for International Development</code></li>
<li>I made a pull request to fix it (<a href="https://github.com/ilri/DSpace/pull/354">#354</a>)</li>
<pre><code>dspace=# update metadatavalue set text_value='United States Agency for International Development' where resource_type_id=2 and metadata_field_id=29 and text_value like '%U.S. Agency for International Development%';
<li>ICARDA’s Mohamed Salem pointed out that it would be easiest to format the <code>cg.creator.id</code> field like “Alan Orth: 0000-0002-1735-7458” because no name will have a “:” so it’s easier to split on</li>
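<li><p>Splitting on the first colon is then a one-liner, and as a bonus the final character of an ORCID iD is an ISO 7064 mod 11-2 check digit that can be validated at the same time; a sketch:</p>

```python
def parse_creator_id(value):
    """Split 'Alan Orth: 0000-0002-1735-7458' on the first ':' into
    (name, orcid) -- safe because names won't contain a colon."""
    name, orcid = value.split(':', 1)
    return name.strip(), orcid.strip()

def orcid_checksum_ok(orcid):
    """Validate the final ISO 7064 mod 11-2 check character ('X' == 10)."""
    digits = orcid.replace('-', '')
    total = 0
    for c in digits[:-1]:
        total = (total + int(c)) * 2
    check = (12 - total % 11) % 11
    expected = 'X' if check == 10 else str(check)
    return digits[-1] == expected
```
</li>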
<li>I finally figured out a few ways to extract ORCID iDs from metadata using XSLT and display them in the XMLUI:</li>
</ul>
<p><img src="/cgspace-notes/2018/02/xmlui-orcid-display.png" alt="Displaying ORCID iDs in XMLUI"/></p>
<ul>
<li>The one on the bottom left uses a similar format to our author display, and the one in the middle uses the format <a href="https://orcid.org/trademark-and-id-display-guidelines">recommended by ORCID’s branding guidelines</a></li>
<li>Also, I realized that the Academicons font icon set we’re using includes an ORCID badge so we don’t need to use the PNG image anymore</li>
<li><p>I updated my <code>resolve-orcids-from-solr.py</code> script to be able to resolve ORCID identifiers from a text file so I renamed it to <code>resolve-orcids.py</code></p></li>
<li><p>Also, I updated it so it uses several new options:</p>
<li>Send Abenet an email about getting a purchase requisition for a new DSpace Test server on Linode</li>
<li>Discuss some of the issues with null values and poor-quality names in some ORCID identifiers with Abenet; I think we’ll now only use ORCID iDs that have been sent to us by partners, not those extracted via keyword searches on orcid.org</li>
<li><p>This should be the version we use (the existing controlled vocabulary generated from CGSpace’s Solr authority core plus the IDs sent to us so far by partners):</p>
<li><p>I updated the <code>resolve-orcids.py</code> to use the “credit-name” if it exists in a profile, falling back to “given-names” + “family-name”</p></li>
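<li><p>The fallback logic is simple once the profile JSON is parsed; the nested field names below follow my reading of the ORCID v2.1 <code>/person</code> response and should be treated as assumptions, not a verified schema:</p>

```python
# Sketch of the credit-name fallback. `person` is the parsed JSON dict from
# the ORCID public API's /person endpoint (field names are assumptions).
def display_name(person):
    name = person.get('name') or {}
    credit = (name.get('credit-name') or {}).get('value')
    if credit:
        return credit  # the name the researcher chose to display
    given = (name.get('given-names') or {}).get('value', '')
    family = (name.get('family-name') or {}).get('value', '')
    return ' '.join(part for part in (given, family) if part)
```
</li>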
<li><p>Also, I added color-coded output to the debug messages and added a “quiet” mode that suppresses the normal behavior of printing results to the screen</p></li>
<li><p>I’m using this as the test input for <code>resolve-orcids.py</code>:</p>
<li><p>Help debug issues with Altmetric badges again, it looks like Altmetric is all kinds of fucked up</p></li>
<li><p>Last week I pointed out that they were tracking Handles from our test server</p></li>
<li><p>Now, their API is responding with content that is marked as content-type JSON but is not valid JSON</p></li>
<li><p>For example, this item: <a href="https://cgspace.cgiar.org/handle/10568/83320">https://cgspace.cgiar.org/handle/10568/83320</a></p></li>
<li><p>The Altmetric JavaScript builds the following API call: <a href="https://api.altmetric.com/v1/handle/10568/83320?callback=_altmetric.embed_callback&domain=cgspace.cgiar.org&key=3c130976ca2b8f2e88f8377633751ba1&cache_until=13-20">https://api.altmetric.com/v1/handle/10568/83320?callback=_altmetric.embed_callback&domain=cgspace.cgiar.org&key=3c130976ca2b8f2e88f8377633751ba1&cache_until=13-20</a></p></li>
<li><p>The response body is <em>not</em> JSON</p></li>
<li><p>To contrast, the following bare API call without query parameters is valid JSON: <a href="https://api.altmetric.com/v1/handle/10568/83320">https://api.altmetric.com/v1/handle/10568/83320</a></p></li>
<li><p>I told them that it’s their JavaScript that is fucked up</p></li>
<li><p>Remove CPWF project number and Humidtropics subject from submission form (<a href="https://github.com/alanorth/DSpace/pull/3">#3</a>)</p></li>
<li><p>I accidentally merged it into my own repository, oops</p></li>
<li><p>It seems to re-use its user agent but makes tons of useless requests and I wonder if I should add “<code>.*spider.*</code>” to the Tomcat Crawler Session Manager valve?</p></li>
<li><p>I will add them to DSpace Test but Abenet says she’s still waiting to send us ILRI’s list</p></li>
<li><p>I will tell her that we should proceed on sharing our work on DSpace Test with the partners this week anyways and we can update the list later</p></li>
<li><p>While regenerating the names for these ORCID identifiers I saw <a href="https://pub.orcid.org/v2.1/0000-0002-2614-426X/person">one that has a weird value for its names</a>:</p>
<li><p>I don’t know if the user accidentally entered this as their name or if that’s how ORCID behaves when the name is private?</p></li>
<li><p>I will remove that one from our list for now</p></li>
<li><p>Remove Dryland Systems subject from submission form because that CRP closed two years ago (<a href="https://github.com/ilri/DSpace/pull/355">#355</a>)</p></li>
<li><p>Run all system updates on DSpace Test</p></li>
<li><p>Email ICT to ask how to proceed with the OCS proforma issue for the new DSpace Test server on Linode</p></li>
<li><p>Thinking about how to preserve ORCID identifiers attached to existing items in CGSpace</p></li>
<li><p>We have over 60,000 unique author + authority combinations on CGSpace:</p>
<li><p>I know from earlier this month that there are only 624 unique ORCID identifiers in the Solr authority core, so it’s way easier to just fetch the unique ORCID iDs from Solr and then go back to PostgreSQL and do the metadata mapping that way</p></li>
<li><p>The query in Solr would simply be <code>orcid_id:*</code></p></li>
<li><p>Assuming I know the authority record with <code>id:d7ef744b-bbd4-4171-b449-00e37e1b776f</code>, I could query PostgreSQL for all metadata records using that authority:</p>
<li>Peter is having problems with “Socket closed” on his submissions page again</li>
<li>He says his personal account loads much faster than his CGIAR account, which could be because the CGIAR account has potentially thousands of submissions over the last few years</li>
<li>I don’t know why it would take so long, but this logic kinda makes sense</li>
<li>Peter is still having problems with “Socket closed” on his submissions page</li>
<li>I have disabled <code>removeAbandoned</code> for now because that’s the only thing I changed in the last few weeks since he started having issues</li>
<li>I think the real line of logic to follow here is why the submissions page is so slow for him (presumably because of loading all his submissions?)</li>
<li>I need to see which SQL queries are run during that time</li>
<li><p>And only a few hours after I disabled the <code>removeAbandoned</code> thing, CGSpace went down and, lo and behold, there were 264 connections, most of which were idle:</p>
<li><p>… but according to the <a href="https://www.postgresql.org/docs/9.5/static/view-pg-locks.html">pg_locks documentation</a> I should have done this to correlate the locks with the activity:</p>
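<p>Per those docs, the correlation is just a join on <code>pid</code>:</p>

```
SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;
```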
<li>abandonWhenPercentageFull: Only start cleaning up abandoned connections if the pool is used for more than X %.</li>
<li>jdbcInterceptors=‘ResetAbandonedTimer’: Make sure the “abandoned” timer is reset every time there is activity on a connection</li>
<li><p>According to the log 01D9932D6E85E90C2BA9FF5563A76D03 is an ILRI editor, doing lots of updating and editing of items</p></li>
<li><p>8100883DAD00666A655AE8EC571C95AE is some Indian IP address</p></li>
<li><p>1E9834E918A550C5CD480076BC1B73A4 looks to be a session shared by the bots</p></li>
<li><p>So maybe it was due to the editor’s uploading of files, perhaps something that was too big?</p></li>
<li><p>I think I’ll increase the JVM heap size on CGSpace from 6144m to 8192m because I’m sick of this random crashing shit, the server has the memory, and I’d rather eliminate this so I can get back to solving PostgreSQL issues and doing other real work</p></li>
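<li><p>Concretely that’s just the <code>-Xmx</code> (and matching <code>-Xms</code>) flags in Tomcat’s environment; the file path and surrounding flags here are illustrative, not the exact production values:</p>

```
# e.g. in /etc/default/tomcat7
JAVA_OPTS="-Djava.awt.headless=true -Xms8192m -Xmx8192m"
```
</li>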
<li><p>Run the few corrections from earlier this month for sponsor on CGSpace:</p>
<pre><code>cgspace=# update metadatavalue set text_value='United States Agency for International Development' where resource_type_id=2 and metadata_field_id=29 and text_value like '%U.S. Agency for International Development%';
<li><p>I took a few snapshots during the process and noticed 500, 800, and even 2000 locks at certain times during the process</p></li>
<li><p>Afterwards I looked a few times and saw only 150 or 200 locks</p></li>
<li><p>On the test server, with the <a href="https://jira.duraspace.org/browse/DS-3636">PostgreSQL indexes from DS-3636</a> applied, it finished instantly</p></li>
<li><p>Run system updates on DSpace Test and reboot the server</p></li>