cgspace-notes/docs/2019-03/index.html
2024-04-27 11:22:58 +03:00

<!DOCTYPE html>
<html lang="en" >
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="March, 2019" />
<meta property="og:description" content="2019-03-01
I checked IITA&rsquo;s 259 Feb 14 records from last month for duplicates using Atmire&rsquo;s Duplicate Checker on a fresh snapshot of CGSpace on my local machine and everything looks good
I am now only waiting to hear from her about where the items should go, though I assume Journal Articles go to IITA Journal Articles collection, etc&hellip;
Looking at the other half of Udana&rsquo;s WLE records from 2018-11
I finished the ones for Restoring Degraded Landscapes (RDL), but these are for Variability, Risks and Competing Uses (VRC)
I did the usual cleanups for whitespace, added regions where they made sense for certain countries, cleaned up the DOI link formats, added rights information based on the publications page for a few items
Most worryingly, there are encoding errors in the abstracts for eleven items, for example:
68.15% <20> 9.45 instead of 68.15% ± 9.45
2003<EFBFBD>2013 instead of 2003–2013
I think I will need to ask Udana to re-copy and paste the abstracts with more care using Google Docs
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2019-03/" />
<meta property="article:published_time" content="2019-03-01T12:16:30+01:00" />
<meta property="article:modified_time" content="2020-07-24T21:57:55+03:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="March, 2019"/>
<meta name="twitter:description" content="2019-03-01
I checked IITA&rsquo;s 259 Feb 14 records from last month for duplicates using Atmire&rsquo;s Duplicate Checker on a fresh snapshot of CGSpace on my local machine and everything looks good
I am now only waiting to hear from her about where the items should go, though I assume Journal Articles go to IITA Journal Articles collection, etc&hellip;
Looking at the other half of Udana&rsquo;s WLE records from 2018-11
I finished the ones for Restoring Degraded Landscapes (RDL), but these are for Variability, Risks and Competing Uses (VRC)
I did the usual cleanups for whitespace, added regions where they made sense for certain countries, cleaned up the DOI link formats, added rights information based on the publications page for a few items
Most worryingly, there are encoding errors in the abstracts for eleven items, for example:
68.15% <20> 9.45 instead of 68.15% ± 9.45
2003<EFBFBD>2013 instead of 2003–2013
I think I will need to ask Udana to re-copy and paste the abstracts with more care using Google Docs
"/>
<meta name="generator" content="Hugo 0.125.4">
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "March, 2019",
"url": "https://alanorth.github.io/cgspace-notes/2019-03/",
"wordCount": "7105",
"datePublished": "2019-03-01T12:16:30+01:00",
"dateModified": "2020-07-24T21:57:55+03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2019-03/">
<title>March, 2019 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.c6ba80bc50669557645abe05f86b73cc5af84408ed20f1551a267bc19ece8228.css" rel="stylesheet" integrity="sha256-xrqAvFBmlVdkWr4F&#43;GtzzFr4RAjtIPFVGiZ7wZ7Ogig=" crossorigin="anonymous">
<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.f5072c55a0721857184db93a50561d7dc13975b4de2e19db7f81eb5f3fa57270.js" integrity="sha256-9QcsVaByGFcYTbk6UFYdfcE5dbTeLhnbf4HrXz&#43;lcnA=" crossorigin="anonymous"></script>
<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2019-03/">March, 2019</a></h2>
<p class="blog-post-meta">
<time datetime="2019-03-01T12:16:30+01:00">Fri Mar 01, 2019</time>
in
<span class="fas fa-folder" aria-hidden="true"></span>&nbsp;<a href="/categories/notes/" rel="category tag">Notes</a>
</p>
</header>
<h2 id="2019-03-01">2019-03-01</h2>
<ul>
<li>I checked IITA&rsquo;s 259 Feb 14 records from last month for duplicates using Atmire&rsquo;s Duplicate Checker on a fresh snapshot of CGSpace on my local machine and everything looks good</li>
<li>I am now only waiting to hear from her about where the items should go, though I assume Journal Articles go to IITA Journal Articles collection, etc&hellip;</li>
<li>Looking at the other half of Udana&rsquo;s WLE records from 2018-11
<ul>
<li>I finished the ones for Restoring Degraded Landscapes (RDL), but these are for Variability, Risks and Competing Uses (VRC)</li>
<li>I did the usual cleanups for whitespace, added regions where they made sense for certain countries, cleaned up the DOI link formats, added rights information based on the publications page for a few items</li>
<li>Most worryingly, there are encoding errors in the abstracts for eleven items, for example:</li>
<li>68.15% <20> 9.45 instead of 68.15% ± 9.45</li>
<li>2003<EFBFBD>2013 instead of 2003–2013</li>
</ul>
</li>
<li>I think I will need to ask Udana to re-copy and paste the abstracts with more care using Google Docs</li>
</ul>
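A quick way to flag abstracts with this kind of mojibake before importing is to scan for the Unicode replacement character (U+FFFD); this is a minimal sketch, and the sample strings and helper name are hypothetical:

```python
def find_encoding_errors(values):
    """Return values containing the Unicode replacement character U+FFFD."""
    return [v for v in values if "\ufffd" in v]

# Hypothetical abstracts, echoing the broken examples above
abstracts = [
    "68.15% \ufffd 9.45",   # should be 68.15% ± 9.45
    "2003\ufffd2013",        # should be 2003–2013
    "A clean abstract",
]
bad = find_encoding_errors(abstracts)
print(len(bad))  # 2
```

Running this over a CSV export column would catch the eleven broken items without eyeballing every abstract.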
<h2 id="2019-03-03">2019-03-03</h2>
<ul>
<li>Trying to finally upload IITA&rsquo;s 259 Feb 14 items to CGSpace so I exported them from DSpace Test:</li>
</ul>
<pre tabindex="0"><code>$ mkdir 2019-03-03-IITA-Feb14
$ dspace export -i 10568/108684 -t COLLECTION -m -n 0 -d 2019-03-03-IITA-Feb14
</code></pre><ul>
<li>As I was inspecting the archive I noticed that there were some problems with the bitstreams:
<ul>
<li>First, Sisay didn&rsquo;t include the bitstream descriptions</li>
<li>Second, only five items had bitstreams and I remember in the discussion with IITA that there should have been nine!</li>
<li>I had to refer to the original CSV from January to find the file names, then download and add them to the export contents manually!</li>
</ul>
</li>
<li>After adding the missing bitstreams and descriptions manually I tested them again locally, then imported them to a temporary collection on CGSpace:</li>
</ul>
<pre tabindex="0"><code>$ dspace import -a -c 10568/99832 -e aorth@stfu.com -m 2019-03-03-IITA-Feb14.map -s /tmp/2019-03-03-IITA-Feb14
</code></pre><ul>
<li>DSpace&rsquo;s export function doesn&rsquo;t include the collections for some reason, so you need to import them somewhere first, then export the collection metadata and re-map the items to proper owning collections based on their types using OpenRefine or something</li>
<li>After re-importing to CGSpace to apply the mappings, I deleted the collection on DSpace Test and ran the <code>dspace cleanup</code> script</li>
<li>Merge the IITA research theme changes from last month to the <code>5_x-prod</code> branch (<a href="https://github.com/ilri/DSpace/pull/413">#413</a>)
<ul>
<li>I will deploy to CGSpace soon and then think about how to batch tag all IITA&rsquo;s existing items with this metadata</li>
</ul>
</li>
<li>Deploy Tomcat 7.0.93 on CGSpace (linode18) after having tested it on DSpace Test (linode19) for a week</li>
</ul>
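The re-mapping step can be sketched with Python&rsquo;s csv module: read the exported collection metadata, pick an owning collection per item type, and write the result back out for <code>dspace metadata-import</code>. The type-to-collection handles and the fallback here are hypothetical, not the real IITA mappings:

```python
import csv
import io

# Hypothetical mapping from item type to owning collection handle
TYPE_TO_COLLECTION = {
    "Journal Article": "10568/99999",
    "Report": "10568/88888",
}
DEFAULT_COLLECTION = "10568/77777"  # hypothetical fallback collection

def remap(reader):
    """Set each row's collection column based on its dc.type."""
    rows = []
    for row in reader:
        row["collection"] = TYPE_TO_COLLECTION.get(row["dc.type"], DEFAULT_COLLECTION)
        rows.append(row)
    return rows

# Tiny inline stand-in for the exported collection metadata CSV
exported = io.StringIO("id,dc.type\n1,Journal Article\n2,Report\n")
rows = remap(csv.DictReader(exported))
print(rows[0]["collection"])  # 10568/99999
```

This does in a few lines what the OpenRefine approach does interactively, which may be easier to repeat for future batches.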
<h2 id="2019-03-06">2019-03-06</h2>
<ul>
<li>Abenet was having problems with a CIP user account, I think that the user could not register</li>
<li>I suspect it&rsquo;s related to the email issue that ICT hasn&rsquo;t responded about since last week</li>
<li>As I thought, I still cannot send emails from CGSpace:</li>
</ul>
<pre tabindex="0"><code>$ dspace test-email
About to send test email:
- To: blah@stfu.com
- Subject: DSpace test email
- Server: smtp.office365.com
Error sending email:
- Error: javax.mail.AuthenticationFailedException
</code></pre><ul>
<li>I will send a follow-up to ICT to ask them to reset the password</li>
</ul>
<h2 id="2019-03-07">2019-03-07</h2>
<ul>
<li>ICT reset the email password and I confirmed that it is working now</li>
<li>Generate a controlled vocabulary of 1187 AGROVOC subjects from the top 1500 that I checked last month, dumping the terms themselves using <code>csvcut</code> and then applying XML controlled vocabulary format in vim and then checking with tidy for good measure:</li>
</ul>
<pre tabindex="0"><code>$ csvcut -c name 2019-02-22-subjects.csv &gt; dspace/config/controlled-vocabularies/dc-subject.xml
$ # apply formatting in XML file
$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/dc-subject.xml
</code></pre><ul>
<li>I tested the AGROVOC controlled vocabulary locally and will deploy it on DSpace Test soon so people can see it</li>
<li>Atmire noticed my message about the &ldquo;solr_update_time_stamp&rdquo; error on the dspace-tech mailing list and created an issue on their tracker to discuss it with me
<ul>
<li>They say the error is harmless, but has nevertheless been fixed in their newer module versions</li>
</ul>
</li>
</ul>
<h2 id="2019-03-08">2019-03-08</h2>
<ul>
<li>There&rsquo;s an issue with CGSpace right now where all items are giving a blank page in the XMLUI
<ul>
<li><del>Interestingly, if I check an item in the REST API it is also mostly blank: only the title and the ID!</del> On second thought I realize I probably was just seeing the default view without any &ldquo;expands&rdquo;</li>
<li>I don&rsquo;t see anything unusual in the Tomcat logs, though there are thousands of those <code>solr_update_time_stamp</code> errors:</li>
</ul>
</li>
</ul>
<pre tabindex="0"><code># journalctl -u tomcat7 | grep -c &#39;Multiple update components target the same field:solr_update_time_stamp&#39;
1076
</code></pre><ul>
<li>I restarted Tomcat and it&rsquo;s OK now&hellip;</li>
<li>Skype meeting with Peter and Abenet and Sisay
<ul>
<li>We want to try to crowd source the correction of invalid AGROVOC terms starting with the ~313 invalid ones from our top 1500</li>
<li>We will share a Google Docs spreadsheet with the partners and ask them to mark the deletions and corrections</li>
<li>Abenet and Alan to spend some time identifying correct DCTERMS fields to move to, with preference for CG Core 2.0 as we want to be globally compliant (use information from SEO crosswalks)</li>
<li>I need to follow up on the privacy page that Sisay worked on</li>
<li>We want to try to migrate the 600 <a href="https://livestock.cgiar.org">Livestock CRP blog posts</a> to CGSpace, Peter will try to export the XML from WordPress so I can try to parse it with a script</li>
</ul>
</li>
</ul>
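Parsing the WordPress export should be straightforward, since the WXR format is just RSS with extensions; a minimal sketch with ElementTree, where the sample XML and the fields extracted are assumptions rather than the actual Livestock CRP export:

```python
import xml.etree.ElementTree as ET

# Tiny stand-in for a WordPress WXR export, which is RSS plus extensions
SAMPLE = (
    "<rss><channel>"
    "<item><title>First post</title><link>https://example.org/p1</link></item>"
    "<item><title>Second post</title><link>https://example.org/p2</link></item>"
    "</channel></rss>"
)

def parse_posts(xml_text):
    """Return (title, link) pairs for each item in the export."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

posts = parse_posts(SAMPLE)
print(len(posts))  # 2
```

A real export would also carry <code>content:encoded</code>, categories, and publish dates in namespaced elements, which would need namespace-aware lookups.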
<h2 id="2019-03-09">2019-03-09</h2>
<ul>
<li>I shared a post on Yammer informing our editors to try the AGROVOC controlled list</li>
<li>The SPDX legal committee had a meeting and discussed the addition of CC-BY-ND-3.0-IGO and other IGO licenses to their list, but it seems unlikely (<a href="https://github.com/spdx/license-list-XML/issues/767#issuecomment-470709673">spdx/license-list-XML/issues/767</a>)</li>
<li>The FireOak report highlights the fact that several CGSpace collections have mixed-content errors due to the use of HTTP links in the Feedburner forms</li>
<li>I see 46 occurrences of these with this query:</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT text_value FROM metadatavalue WHERE resource_type_id in (3,4) AND (text_value LIKE &#39;%http://feedburner.%&#39; OR text_value LIKE &#39;%http://feeds.feedburner.%&#39;);
</code></pre><ul>
<li>I can replace these globally using the following SQL:</li>
</ul>
<pre tabindex="0"><code>dspace=# UPDATE metadatavalue SET text_value = REGEXP_REPLACE(text_value, &#39;http://feedburner.&#39;,&#39;https://feedburner.&#39;, &#39;g&#39;) WHERE resource_type_id in (3,4) AND text_value LIKE &#39;%http://feedburner.%&#39;;
UPDATE 43
dspace=# UPDATE metadatavalue SET text_value = REGEXP_REPLACE(text_value, &#39;http://feeds.feedburner.&#39;,&#39;https://feeds.feedburner.&#39;, &#39;g&#39;) WHERE resource_type_id in (3,4) AND text_value LIKE &#39;%http://feeds.feedburner.%&#39;;
UPDATE 44
</code></pre><ul>
<li>I ran the corrections on CGSpace and DSpace Test</li>
</ul>
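One caveat with <code>REGEXP_REPLACE</code> is that it treats the pattern as a regex, so an unescaped dot matches any character; a quick sanity check of the safer escaped pattern, using Python&rsquo;s <code>re.sub</code> as a stand-in for PostgreSQL:

```python
import re

# The dot is escaped so it only matches a literal "." rather than
# any character, mirroring what REGEXP_REPLACE would need
pattern = r"http://feedburner\."
fixed = re.sub(pattern, "https://feedburner.", "see http://feedburner.com/feed")
print(fixed)  # see https://feedburner.com/feed
```

In practice the unescaped pattern happens to work here because the dot position always holds a literal dot, but escaping avoids surprises.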
<h2 id="2019-03-10">2019-03-10</h2>
<ul>
<li>Working on tagging IITA&rsquo;s items with their new research theme (<code>cg.identifier.iitatheme</code>) based on their existing IITA subjects (see <a href="/cgspace-notes/2019-02/">notes from 2019-02</a>)</li>
<li>I exported the entire IITA community from CGSpace and then used <code>csvcut</code> to extract only the needed fields:</li>
</ul>
<pre tabindex="0"><code>$ csvcut -c &#39;id,cg.subject.iita,cg.subject.iita[],cg.subject.iita[en],cg.subject.iita[en_US]&#39; ~/Downloads/10568-68616.csv &gt; /tmp/iita.csv
</code></pre><ul>
<li>
<p>After importing to OpenRefine I realized that tagging items based on their subjects is tricky because of the row/record mode of OpenRefine when you split the multi-value cells as well as the fact that some items might need to be tagged twice (thus needing a <code>||</code>)</p>
</li>
<li>
<p>I think it might actually be easier to filter by IITA subject, then by IITA theme (if needed), and then do transformations with some conditional values in GREL expressions like:</p>
</li>
</ul>
<pre tabindex="0"><code>if(isBlank(value), &#39;PLANT PRODUCTION &amp; HEALTH&#39;, value + &#39;||PLANT PRODUCTION &amp; HEALTH&#39;)
</code></pre><ul>
<li>Then it&rsquo;s more annoying because there are four IITA subject columns&hellip;</li>
<li>In total this would add research themes to 1,755 items</li>
<li>I want to double check one last time with Bosede that they would like to do this, because I also see that this will tag a few hundred items from the 1970s and 1980s</li>
</ul>
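The GREL conditional above can be sanity-checked with an equivalent Python function (a sketch; the theme value is just the example from the GREL expression):

```python
def add_theme(value, theme="PLANT PRODUCTION & HEALTH"):
    """Append a theme, using '||' as DSpace's multi-value separator."""
    if not value:
        return theme
    return value + "||" + theme

print(add_theme(""))  # PLANT PRODUCTION & HEALTH
print(add_theme("NATURAL RESOURCE MANAGEMENT"))
```

The same blank-versus-append logic would apply to each of the four IITA subject columns in turn.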
<h2 id="2019-03-11">2019-03-11</h2>
<ul>
<li>Bosede said that she would like the IITA research theme tagging only for items since 2015, which would be 256 items</li>
</ul>
<h2 id="2019-03-12">2019-03-12</h2>
<ul>
<li>I imported the changes to 256 of IITA&rsquo;s records on CGSpace</li>
</ul>
<h2 id="2019-03-14">2019-03-14</h2>
<ul>
<li>CGSpace had the same issue with blank items like earlier this month and I restarted Tomcat to fix it</li>
<li>Create a pull request to change Swaziland to Eswatini and Macedonia to North Macedonia (<a href="https://github.com/ilri/DSpace/pull/414">#414</a>)
<ul>
<li>I see thirty-six items using Swaziland country metadata, and Peter says we should change only those from 2018 and 2019</li>
<li>I think that I could get the resource IDs from SQL and then export them using <code>dspace metadata-export</code>&hellip;</li>
</ul>
</li>
<li>This is a bit ugly, but it works (using the <a href="https://wiki.lyrasis.org/display/DSPACE/Helper+SQL+functions+for+DSpace+5">DSpace 5 SQL helper function</a> to resolve ID to handle):</li>
</ul>
<pre tabindex="0"><code>for id in $(psql -U postgres -d dspacetest -h localhost -c &#34;SELECT resource_id FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=228 AND text_value LIKE &#39;%SWAZILAND%&#39;&#34; | grep -oE &#39;[0-9]{3,}&#39;); do
echo &#34;Getting handle for id: ${id}&#34;
handle=$(psql -U postgres -d dspacetest -h localhost -c &#34;SELECT ds5_item2itemhandle($id)&#34; | grep -oE &#39;[0-9]{5}/[0-9]+&#39;)
~/dspace/bin/dspace metadata-export -f /tmp/${id}.csv -i $handle
done
</code></pre><ul>
<li>Then I couldn&rsquo;t figure out a clever way to join all the CSVs, so I just grepped them to find the IDs with dates from 2018 and 2019 and there are apparently only three:</li>
</ul>
<pre tabindex="0"><code>$ grep -oE &#39;201[89]&#39; /tmp/*.csv | sort -u
/tmp/94834.csv:2018
/tmp/95615.csv:2018
/tmp/96747.csv:2018
</code></pre><ul>
<li>And looking at those items more closely, only one of them has an <em>issue date</em> after 2018-04, so I will only update that one (as the country&rsquo;s name only changed in 2018-04)</li>
<li>Run all system updates and reboot linode20</li>
<li>Follow up with Felix from Earlham to see if he&rsquo;s done testing DSpace Test with COPO so I can re-sync the server from CGSpace</li>
</ul>
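Instead of grepping the per-item CSVs, they could be joined and filtered by year with a short Python sketch; the <code>dc.date.issued</code> column name and the sample rows are assumptions here:

```python
import csv
import io

def items_since(rows, year=2018):
    """Keep rows whose issue date is in the given year or later."""
    return [r for r in rows if int(r["dc.date.issued"][:4]) >= year]

# Inline stand-in for the per-item CSVs; in practice they could be
# read from /tmp/*.csv and concatenated first
sample = io.StringIO("id,dc.date.issued\n94834,2018-03-01\n12345,1979-01-01\n")
recent = items_since(csv.DictReader(sample))
print(len(recent))  # 1
```

This also avoids the false positives a bare <code>grep -oE &#39;201[89]&#39;</code> can produce when 2018 or 2019 appears in a title or abstract rather than the date field.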
<h2 id="2019-03-15">2019-03-15</h2>
<ul>
<li>CGSpace (linode18) has the blank page error again</li>
<li>I&rsquo;m not sure if it&rsquo;s related, but I see the following error in DSpace&rsquo;s log:</li>
</ul>
<pre tabindex="0"><code>2019-03-15 14:09:32,685 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL QueryTable Error -
java.sql.SQLException: Connection org.postgresql.jdbc.PgConnection@55ba10b5 is closed.
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.checkOpen(DelegatingConnection.java:398)
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:279)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.prepareStatement(PoolingDataSource.java:313)
at org.dspace.storage.rdbms.DatabaseManager.queryTable(DatabaseManager.java:220)
at org.dspace.authorize.AuthorizeManager.getPolicies(AuthorizeManager.java:612)
at org.dspace.content.crosswalk.METSRightsCrosswalk.disseminateElement(METSRightsCrosswalk.java:154)
at org.dspace.content.crosswalk.METSRightsCrosswalk.disseminateElement(METSRightsCrosswalk.java:300)
</code></pre><ul>
<li>Interestingly, I see a pattern of these errors increasing, with single and double digit numbers over the past month, <del>but spikes of over 1,000 today</del>, yesterday, and on 2019-03-08, which was exactly the first time we saw this blank page error recently</li>
</ul>
<pre tabindex="0"><code>$ grep -I &#39;SQL QueryTable Error&#39; dspace.log.2019-0* | awk -F: &#39;{print $1}&#39; | sort | uniq -c | tail -n 25
5 dspace.log.2019-02-27
11 dspace.log.2019-02-28
29 dspace.log.2019-03-01
24 dspace.log.2019-03-02
41 dspace.log.2019-03-03
11 dspace.log.2019-03-04
9 dspace.log.2019-03-05
15 dspace.log.2019-03-06
7 dspace.log.2019-03-07
9 dspace.log.2019-03-08
22 dspace.log.2019-03-09
23 dspace.log.2019-03-10
18 dspace.log.2019-03-11
13 dspace.log.2019-03-12
10 dspace.log.2019-03-13
25 dspace.log.2019-03-14
12 dspace.log.2019-03-15
67 dspace.log.2019-03-16
72 dspace.log.2019-03-17
8 dspace.log.2019-03-18
15 dspace.log.2019-03-19
21 dspace.log.2019-03-20
29 dspace.log.2019-03-21
41 dspace.log.2019-03-22
4807 dspace.log.2019-03-23
</code></pre><ul>
<li>(Update on 2019-03-23 to use correct grep query)</li>
<li>There are not too many connections currently in PostgreSQL:</li>
</ul>
<pre tabindex="0"><code>$ psql -c &#39;select * from pg_stat_activity&#39; | grep -o -E &#39;(dspaceWeb|dspaceApi|dspaceCli)&#39; | sort | uniq -c
6 dspaceApi
10 dspaceCli
15 dspaceWeb
</code></pre><ul>
<li>I didn&rsquo;t see anything interesting in the PostgreSQL logs, though this stack trace from the Tomcat logs (in the systemd journal) from earlier today <em>might</em> be related?</li>
</ul>
<pre tabindex="0"><code>SEVERE: Servlet.service() for servlet [spring] in context with path [] threw exception [org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.util.EmptyStackException] with root cause
java.util.EmptyStackException
at java.util.Stack.peek(Stack.java:102)
at java.util.Stack.pop(Stack.java:84)
at org.apache.cocoon.callstack.CallStack.leave(CallStack.java:54)
at org.apache.cocoon.servletservice.CallStackHelper.leaveServlet(CallStackHelper.java:85)
at org.apache.cocoon.servletservice.ServletServiceContext$PathDispatcher.forward(ServletServiceContext.java:484)
at org.apache.cocoon.servletservice.ServletServiceContext$PathDispatcher.forward(ServletServiceContext.java:443)
at org.apache.cocoon.servletservice.spring.ServletFactoryBean$ServiceInterceptor.invoke(ServletFactoryBean.java:264)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
at com.sun.proxy.$Proxy90.service(Unknown Source)
at org.dspace.springmvc.CocoonView.render(CocoonView.java:113)
at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1180)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:950)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.dspace.rdf.negotiation.NegotiationFilter.doFilter(NegotiationFilter.java:59)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.dspace.utils.servlet.DSpaceWebappServletFilter.doFilter(DSpaceWebappServletFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:494)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.apache.catalina.valves.CrawlerSessionManagerValve.invoke(CrawlerSessionManagerValve.java:234)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1137)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:317)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
</code></pre><ul>
<li>For now I will just restart Tomcat&hellip;</li>
</ul>
<h2 id="2019-03-17">2019-03-17</h2>
<ul>
<li>Last week Felix from Earlham said that they finished testing on DSpace Test (linode19) so I made backups of some things there and re-deployed the system on Ubuntu 18.04
<ul>
<li>During re-deployment I hit a few issues with the <a href="https://github.com/ilri/rmg-ansible-public">Ansible playbooks</a> and made some minor improvements</li>
<li>There seems to be an <a href="https://bugs.launchpad.net/ubuntu/+source/nodejs/+bug/1794589">issue with nodejs&rsquo;s dependencies now</a>, which causes npm to get uninstalled when installing the certbot dependencies (due to a conflict in libssl dependencies)</li>
<li>I re-worked the playbooks to use Node.js from the upstream official repository for now</li>
</ul>
</li>
<li>Create and merge pull request for the AGROVOC controlled list (<a href="https://github.com/ilri/DSpace/pull/415">#415</a>)
<ul>
<li>Run all system updates on CGSpace (linode18) and re-deploy the <code>5_x-prod</code> branch and reboot the server</li>
</ul>
</li>
<li>Re-sync DSpace Test with a fresh database snapshot and assetstore from CGSpace
<ul>
<li>After restarting Tomcat, Solr was giving the &ldquo;Error opening new searcher&rdquo; error for all cores</li>
<li>I stopped Tomcat, added <code>ulimit -v unlimited</code> to the <code>catalina.sh</code> script and deleted all old locks in the DSpace <code>solr</code> directory and then DSpace started up normally</li>
<li>I&rsquo;m still not exactly sure why I see this error and if the <code>ulimit</code> trick actually helps, as the <code>tomcat7.service</code> has <code>LimitAS=infinity</code> anyways (and from checking the PID&rsquo;s limits file in <code>/proc</code> it seems to be applied)</li>
<li>Then I noticed that the item displays were blank&hellip; so I checked the database info and saw there were some unfinished migrations</li>
<li>I&rsquo;m not entirely sure if it&rsquo;s related, but I tried to delete the old migrations and then force running the ignored ones like when we upgraded to <a href="/cgspace-notes/2018-06/">DSpace 5.8 in 2018-06</a> and then after restarting Tomcat I could see the item displays again</li>
</ul>
</li>
<li>I copied the 2019 Solr statistics core from CGSpace to DSpace Test and it works (and is only 5.5GB currently), so now we have some useful stats on DSpace Test for the CUA module and the dspace-statistics-api</li>
<li>I ran DSpace&rsquo;s cleanup task on CGSpace (linode18) and there were errors:</li>
</ul>
<pre tabindex="0"><code>$ dspace cleanup -v
Error: ERROR: update or delete on table &#34;bitstream&#34; violates foreign key constraint &#34;bundle_primary_bitstream_id_fkey&#34; on table &#34;bundle&#34;
Detail: Key (bitstream_id)=(164496) is still referenced from table &#34;bundle&#34;.
</code></pre><ul>
<li>The solution is, as always:</li>
</ul>
<pre tabindex="0"><code># su - postgres
$ psql dspace -c &#39;update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (164496);&#39;
UPDATE 1
</code></pre><h2 id="2019-03-18">2019-03-18</h2>
<ul>
<li>I noticed that the regular expression for validating lines from input files in my <code>agrovoc-lookup.py</code> script was skipping characters with accents, etc, so I changed it to use the <code>\w</code> character class for words instead of trying to match <code>[A-Z]</code> etc&hellip;
<ul>
<li>We have Spanish and French subjects so this is very important</li>
<li>Also there were some subjects with apostrophes, dashes, and periods&hellip; these are probably invalid AGROVOC subject terms, but we should nevertheless save them to the rejects file instead of skipping them</li>
</ul>
</li>
<li>Dump top 1500 subjects from CGSpace to try one more time to generate a list of invalid terms using my <code>agrovoc-lookup.py</code> script:</li>
</ul>
<pre tabindex="0"><code>dspace=# \COPY (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE metadata_field_id = 57 AND resource_type_id = 2 GROUP BY text_value ORDER BY count DESC LIMIT 1500) to /tmp/2019-03-18-top-1500-subject.csv WITH CSV HEADER;
COPY 1500
dspace=# \q
$ csvcut -c text_value /tmp/2019-03-18-top-1500-subject.csv &gt; 2019-03-18-top-1500-subject.csv
$ ./agrovoc-lookup.py -l en -i 2019-03-18-top-1500-subject.csv -om /tmp/en-subjects-matched.txt -or /tmp/en-subjects-unmatched.txt
$ ./agrovoc-lookup.py -l es -i 2019-03-18-top-1500-subject.csv -om /tmp/es-subjects-matched.txt -or /tmp/es-subjects-unmatched.txt
$ ./agrovoc-lookup.py -l fr -i 2019-03-18-top-1500-subject.csv -om /tmp/fr-subjects-matched.txt -or /tmp/fr-subjects-unmatched.txt
$ cat /tmp/*-subjects-matched.txt | sort -u &gt; /tmp/subjects-matched-sorted.txt
$ wc -l /tmp/subjects-matched-sorted.txt
1318 /tmp/subjects-matched-sorted.txt
$ sort -u 2019-03-18-top-1500-subject.csv &gt; /tmp/1500-subjects-sorted.txt
$ comm -13 /tmp/subjects-matched-sorted.txt /tmp/1500-subjects-sorted.txt &gt; 2019-03-18-subjects-unmatched.txt
$ wc -l 2019-03-18-subjects-unmatched.txt
182 2019-03-18-subjects-unmatched.txt
</code></pre><ul>
<li>So the new total of matched terms with the updated regex is 1317 and unmatched is 183 (previous number of matched terms was 1187)</li>
<li>Create and merge a pull request to update the controlled vocabulary for AGROVOC terms (<a href="https://github.com/ilri/DSpace/pull/416">#416</a>)</li>
<li>We are getting the blank page issue on CGSpace again today and I see a <del>large number</del> of the &ldquo;SQL QueryTable Error&rdquo; in the DSpace log again (last time was 2019-03-15):</li>
</ul>
<pre tabindex="0"><code>$ grep -c &#39;SQL QueryTable Error&#39; dspace.log.2019-03-1[5678]
dspace.log.2019-03-15:929
dspace.log.2019-03-16:67
dspace.log.2019-03-17:72
dspace.log.2019-03-18:1038
</code></pre><ul>
<li>Though WTF, this grep seems to be giving weird inaccurate results actually, and the real number of errors is much lower if I exclude the &ldquo;binary file matches&rdquo; result with <code>-I</code>:</li>
</ul>
<pre tabindex="0"><code>$ grep -I &#39;SQL QueryTable Error&#39; dspace.log.2019-03-18 | wc -l
8
$ grep -I &#39;SQL QueryTable Error&#39; dspace.log.2019-03-{08,14,15,16,17,18} | awk -F: &#39;{print $1}&#39; | sort | uniq -c
9 dspace.log.2019-03-08
25 dspace.log.2019-03-14
12 dspace.log.2019-03-15
67 dspace.log.2019-03-16
72 dspace.log.2019-03-17
8 dspace.log.2019-03-18
</code></pre><ul>
<li>It seems to be something with grep doing binary matching on some log files for some reason, so I guess I need to always use <code>-I</code> to say binary files don&rsquo;t match</li>
<li>Anyways, the full error in DSpace&rsquo;s log is:</li>
</ul>
<pre tabindex="0"><code>2019-03-18 12:26:23,331 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL QueryTable Error -
java.sql.SQLException: Connection org.postgresql.jdbc.PgConnection@75eaa668 is closed.
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.checkOpen(DelegatingConnection.java:398)
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:279)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.prepareStatement(PoolingDataSource.java:313)
at org.dspace.storage.rdbms.DatabaseManager.queryTable(DatabaseManager.java:220)
</code></pre><ul>
<li>There is a low number of connections to PostgreSQL currently:</li>
</ul>
<pre tabindex="0"><code>$ psql -c &#39;select * from pg_stat_activity&#39; | wc -l
33
$ psql -c &#39;select * from pg_stat_activity&#39; | grep -o -E &#39;(dspaceWeb|dspaceApi|dspaceCli)&#39; | sort | uniq -c
6 dspaceApi
7 dspaceCli
15 dspaceWeb
</code></pre><ul>
<li>I looked in the PostgreSQL logs, but all I see are a bunch of these errors going back two months to January:</li>
</ul>
<pre tabindex="0"><code>2019-01-13 06:25:13.062 CET [9157] postgres@template1 ERROR: column &#34;waiting&#34; does not exist at character 217
</code></pre><ul>
<li>This is unrelated and apparently due to <a href="https://github.com/munin-monitoring/munin/issues/746">Munin checking a column that was changed in PostgreSQL 9.6</a></li>
<li>I suspect that this issue with the blank pages might not be PostgreSQL after all, perhaps it&rsquo;s a Cocoon thing?</li>
<li>Looking in the cocoon logs I see a large number of warnings about &ldquo;Can not load requested doc&rdquo; around 11AM and 12PM:</li>
</ul>
<pre tabindex="0"><code>$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-18 | grep -oE &#39;2019-03-18 [0-9]{2}:&#39; | sort | uniq -c
2 2019-03-18 00:
6 2019-03-18 02:
3 2019-03-18 04:
1 2019-03-18 05:
1 2019-03-18 07:
2 2019-03-18 08:
4 2019-03-18 09:
5 2019-03-18 10:
863 2019-03-18 11:
203 2019-03-18 12:
14 2019-03-18 13:
1 2019-03-18 14:
</code></pre><ul>
<li>And a few days ago, on 2019-03-15, the last time this happened, it was in the afternoon, and the same spike pattern occurs around 1PM:</li>
</ul>
<pre tabindex="0"><code>$ xzgrep &#39;Can not load requested doc&#39; cocoon.log.2019-03-15.xz | grep -oE &#39;2019-03-15 [0-9]{2}:&#39; | sort | uniq -c
4 2019-03-15 01:
3 2019-03-15 02:
1 2019-03-15 03:
13 2019-03-15 04:
1 2019-03-15 05:
2 2019-03-15 06:
3 2019-03-15 07:
27 2019-03-15 09:
9 2019-03-15 10:
3 2019-03-15 11:
2 2019-03-15 12:
531 2019-03-15 13:
274 2019-03-15 14:
4 2019-03-15 15:
75 2019-03-15 16:
5 2019-03-15 17:
5 2019-03-15 18:
6 2019-03-15 19:
2 2019-03-15 20:
4 2019-03-15 21:
3 2019-03-15 22:
1 2019-03-15 23:
</code></pre><ul>
<li>And again on 2019-03-08, surprise surprise, it happened in the morning:</li>
</ul>
<pre tabindex="0"><code>$ xzgrep &#39;Can not load requested doc&#39; cocoon.log.2019-03-08.xz | grep -oE &#39;2019-03-08 [0-9]{2}:&#39; | sort | uniq -c
11 2019-03-08 01:
3 2019-03-08 02:
1 2019-03-08 03:
2 2019-03-08 04:
1 2019-03-08 05:
1 2019-03-08 06:
1 2019-03-08 08:
425 2019-03-08 09:
432 2019-03-08 10:
717 2019-03-08 11:
59 2019-03-08 12:
</code></pre><ul>
<li>I&rsquo;m not sure if it&rsquo;s cocoon or that&rsquo;s just a symptom of something else</li>
</ul>
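<p>The hourly grep pipelines above are easy to replicate in Python for ad-hoc analysis; this is a minimal sketch that assumes log lines start with a <code>YYYY-MM-DD HH:</code> timestamp like the Cocoon entries (the sample lines are fabricated for illustration):</p>

```python
import re
from collections import Counter

def hourly_histogram(lines, message):
    """Count occurrences of `message` per hour, like grep | grep -oE | sort | uniq -c."""
    hour_re = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}):')
    counts = Counter()
    for line in lines:
        if message in line:
            m = hour_re.match(line)
            if m:
                counts[m.group(1)] += 1
    return dict(sorted(counts.items()))

# Fabricated sample log lines in the Cocoon log's timestamp format
log = [
    "2019-03-18 11:01:02,123 WARN ... Can not load requested doc: ...",
    "2019-03-18 11:59:59,456 WARN ... Can not load requested doc: ...",
    "2019-03-18 12:00:01,789 WARN ... Can not load requested doc: ...",
    "2019-03-18 12:30:00,000 INFO ... something else",
]
print(hourly_histogram(log, "Can not load requested doc"))
# → {'2019-03-18 11': 2, '2019-03-18 12': 1}
```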
<h2 id="2019-03-19">2019-03-19</h2>
<ul>
<li>I found a handful of AGROVOC subjects that use a non-breaking space (0x00a0) instead of a regular space, which makes for some pretty confusing debugging&hellip;</li>
<li>I will replace these in the database immediately to save myself the headache later:</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT count(text_value) FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = 57 AND text_value ~ &#39;.+\u00a0.+&#39;;
count
-------
84
(1 row)
</code></pre><ul>
<li>Perhaps my <code>agrovoc-lookup.py</code> script could notify if it finds these because they potentially give false negatives</li>
<li>CGSpace (linode18) is having problems with Solr again, I&rsquo;m seeing &ldquo;Error opening new searcher&rdquo; in the Solr logs and there are no stats for previous years</li>
<li>Apparently the Solr statistics shards didn&rsquo;t load properly when we restarted Tomcat <em>yesterday</em>:</li>
</ul>
<pre tabindex="0"><code>2019-03-18 12:32:39,799 ERROR org.apache.solr.core.CoreContainer @ Error creating core [statistics-2018]: Error opening new searcher
...
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:845)
... 31 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/cgspace.cgiar.org/solr/statistics-2018/data/index/write.lock
</code></pre><ul>
<li>For reference, I don&rsquo;t see the <code>ulimit -v unlimited</code> in the <code>catalina.sh</code> script, though the <code>tomcat7</code> systemd service has <code>LimitAS=infinity</code></li>
<li>The limits of the current Tomcat java process are:</li>
</ul>
<pre tabindex="0"><code># cat /proc/27182/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 128589 128589 processes
Max open files 16384 16384 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 128589 128589 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
</code></pre><ul>
<li>I will try to add <code>ulimit -v unlimited</code> to the Catalina startup script and check the output of the limits to see if it&rsquo;s different in practice, as some wisdom on Stack Overflow says this solves the Solr core issues and I&rsquo;ve superstitiously tried it various times in the past
<ul>
<li>The result is the same before and after, so <em>adding the ulimit directly is unnecessary</em> (whether unlimited address space is actually useful is another question)</li>
</ul>
</li>
<li>For now I will just stop Tomcat, delete Solr locks, then start Tomcat again:</li>
</ul>
<pre tabindex="0"><code># systemctl stop tomcat7
# find /home/cgspace.cgiar.org/solr/ -iname &#34;*.lock&#34; -delete
# systemctl start tomcat7
</code></pre><ul>
<li>After restarting I confirmed that all Solr statistics cores were loaded successfully&hellip;</li>
<li>Another avenue might be to look at point releases in Solr 4.10.x, as we&rsquo;re running 4.10.2 and they released 4.10.3 and 4.10.4 back in 2014 or 2015
<ul>
<li>I see several issues regarding locks and IndexWriter that were fixed in Solr and Lucene 4.10.3 and 4.10.4&hellip;</li>
</ul>
</li>
<li>I sent a mail to the dspace-tech mailing list to ask about Solr issues</li>
<li>Testing Solr 4.10.4 on DSpace 5.8:
<ul>
<li><input checked="" disabled="" type="checkbox"> Discovery indexing</li>
<li><input checked="" disabled="" type="checkbox"> dspace-statistics-api indexer</li>
<li><input checked="" disabled="" type="checkbox"> /solr admin UI</li>
</ul>
</li>
</ul>
<h2 id="2019-03-20">2019-03-20</h2>
<ul>
<li>Create a branch for Solr 4.10.4 changes so I can test on DSpace Test (linode19)
<ul>
<li>Deployed Solr 4.10.4 on DSpace Test and will leave it there for a few weeks, as well as on my local environment</li>
</ul>
</li>
</ul>
<h2 id="2019-03-21">2019-03-21</h2>
<ul>
<li>It&rsquo;s been two days since we had the blank page issue on CGSpace, and looking in the Cocoon logs I see very low numbers of the errors that we were seeing the last time the issue occurred:</li>
</ul>
<pre tabindex="0"><code>$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-20 | grep -oE &#39;2019-03-20 [0-9]{2}:&#39; | sort | uniq -c
3 2019-03-20 00:
12 2019-03-20 02:
$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-21 | grep -oE &#39;2019-03-21 [0-9]{2}:&#39; | sort | uniq -c
4 2019-03-21 00:
1 2019-03-21 02:
4 2019-03-21 03:
1 2019-03-21 05:
4 2019-03-21 06:
11 2019-03-21 07:
14 2019-03-21 08:
3 2019-03-21 09:
4 2019-03-21 10:
5 2019-03-21 11:
4 2019-03-21 12:
3 2019-03-21 13:
6 2019-03-21 14:
2 2019-03-21 15:
3 2019-03-21 16:
3 2019-03-21 18:
1 2019-03-21 19:
6 2019-03-21 20:
</code></pre><ul>
<li>To investigate the Solr lock issue I added a <code>find</code> command to the Tomcat 7 service with <code>ExecStartPre</code> and <code>ExecStopPost</code> and noticed that the lock files are always there&hellip;
<ul>
<li>Perhaps the lock files are less of an issue than I thought?</li>
<li>I will share my thoughts with the dspace-tech community</li>
</ul>
</li>
<li>In other news, I notice that systemd always thinks that Tomcat has failed when it stops because the JVM exits with code 143, which is apparently normal when processes gracefully receive a SIGTERM (128 + 15 == 143)
<ul>
<li>We can add <code>SuccessExitStatus=143</code> to the systemd service so that it knows this is a successful exit</li>
</ul>
</li>
</ul>
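<p>The 128 + 15 arithmetic is easy to sanity-check: processes killed by an unhandled signal conventionally get reported by the shell as 128 plus the signal number, and SIGTERM is signal 15. A quick demonstration in Python (note that <code>subprocess</code> reports signal deaths as a negative return code rather than the shell's 128 + n convention):</p>

```python
import signal
import subprocess
import sys

# SIGTERM is signal 15, so a shell reports a SIGTERM death as 128 + 15 = 143
assert 128 + signal.SIGTERM == 143

# Spawn a child that sends SIGTERM to itself
proc = subprocess.run(
    [sys.executable, "-c", "import os, signal; os.kill(os.getpid(), signal.SIGTERM)"]
)
# Python's subprocess uses a negative code for signal deaths (-15),
# whereas a shell (and systemd) would see 143
print(proc.returncode)  # → -15
```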
<h2 id="2019-03-22">2019-03-22</h2>
<ul>
<li>Share the initial list of invalid AGROVOC terms on Yammer to ask the editors for help in correcting them</li>
<li>Advise Phanuel Ayuka from IITA about using controlled vocabularies in DSpace</li>
</ul>
<h2 id="2019-03-23">2019-03-23</h2>
<ul>
<li>CGSpace (linode18) is having the blank page issue again and it seems to have started last night around 21:00:</li>
</ul>
<pre tabindex="0"><code>$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-22 | grep -oE &#39;2019-03-22 [0-9]{2}:&#39; | sort | uniq -c
2 2019-03-22 00:
69 2019-03-22 01:
1 2019-03-22 02:
13 2019-03-22 03:
2 2019-03-22 05:
2 2019-03-22 06:
8 2019-03-22 07:
4 2019-03-22 08:
12 2019-03-22 09:
7 2019-03-22 10:
1 2019-03-22 11:
2 2019-03-22 12:
14 2019-03-22 13:
4 2019-03-22 15:
7 2019-03-22 16:
7 2019-03-22 17:
3 2019-03-22 18:
3 2019-03-22 19:
7 2019-03-22 20:
323 2019-03-22 21:
685 2019-03-22 22:
357 2019-03-22 23:
$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-23 | grep -oE &#39;2019-03-23 [0-9]{2}:&#39; | sort | uniq -c
575 2019-03-23 00:
445 2019-03-23 01:
518 2019-03-23 02:
436 2019-03-23 03:
387 2019-03-23 04:
593 2019-03-23 05:
468 2019-03-23 06:
541 2019-03-23 07:
440 2019-03-23 08:
260 2019-03-23 09:
</code></pre><ul>
<li>I was curious to see if clearing the Cocoon cache in the XMLUI control panel would fix it, but it didn&rsquo;t</li>
<li>Trying to drill down more, I see that the bulk of the errors started around 21:20:</li>
</ul>
<pre tabindex="0"><code>$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-22 | grep -oE &#39;2019-03-22 21:[0-9]&#39; | sort | uniq -c
1 2019-03-22 21:0
1 2019-03-22 21:1
59 2019-03-22 21:2
69 2019-03-22 21:3
89 2019-03-22 21:4
104 2019-03-22 21:5
</code></pre><ul>
<li>Looking at the Cocoon log around that time I see the full error is:</li>
</ul>
<pre tabindex="0"><code>2019-03-22 21:21:34,378 WARN org.apache.cocoon.components.xslt.TraxErrorListener - Can not load requested doc: unknown protocol: cocoon at jndi:/localhost/themes/CIAT/xsl/../../0_CGIAR/xsl//aspect/artifactbrowser/common.xsl:141:90
</code></pre><ul>
<li>A few milliseconds before that time I see this in the DSpace log:</li>
</ul>
<pre tabindex="0"><code>2019-03-22 21:21:34,356 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL QueryTable Error -
org.postgresql.util.PSQLException: This statement has been closed.
at org.postgresql.jdbc.PgStatement.checkClosed(PgStatement.java:694)
at org.postgresql.jdbc.PgStatement.getMaxRows(PgStatement.java:501)
at org.postgresql.jdbc.PgStatement.createResultSet(PgStatement.java:153)
at org.postgresql.jdbc.PgStatement$StatementResultHandler.handleResultRows(PgStatement.java:204)
at org.postgresql.core.ResultHandlerDelegate.handleResultRows(ResultHandlerDelegate.java:29)
at org.postgresql.core.v3.QueryExecutorImpl$1.handleResultRows(QueryExecutorImpl.java:528)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2120)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:143)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:106)
at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at org.dspace.storage.rdbms.DatabaseManager.queryTable(DatabaseManager.java:224)
at org.dspace.storage.rdbms.DatabaseManager.querySingleTable(DatabaseManager.java:375)
at org.dspace.storage.rdbms.DatabaseManager.findByUnique(DatabaseManager.java:544)
at org.dspace.storage.rdbms.DatabaseManager.find(DatabaseManager.java:501)
at org.dspace.eperson.Group.find(Group.java:706)
...
2019-03-22 21:21:34,381 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL query singleTable Error -
org.postgresql.util.PSQLException: This statement has been closed.
at org.postgresql.jdbc.PgStatement.checkClosed(PgStatement.java:694)
at org.postgresql.jdbc.PgStatement.getMaxRows(PgStatement.java:501)
at org.postgresql.jdbc.PgStatement.createResultSet(PgStatement.java:153)
...
2019-03-22 21:21:34,386 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL findByUnique Error -
org.postgresql.util.PSQLException: This statement has been closed.
at org.postgresql.jdbc.PgStatement.checkClosed(PgStatement.java:694)
at org.postgresql.jdbc.PgStatement.getMaxRows(PgStatement.java:501)
at org.postgresql.jdbc.PgStatement.createResultSet(PgStatement.java:153)
...
2019-03-22 21:21:34,395 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL find Error -
org.postgresql.util.PSQLException: This statement has been closed.
at org.postgresql.jdbc.PgStatement.checkClosed(PgStatement.java:694)
at org.postgresql.jdbc.PgStatement.getMaxRows(PgStatement.java:501)
at org.postgresql.jdbc.PgStatement.createResultSet(PgStatement.java:153)
at org.postgresql.jdbc.PgStatement$StatementResultHandler.handleResultRows(PgStatement.java:204)
</code></pre><ul>
<li>
<p>I restarted Tomcat and now the item displays are working again for now</p>
</li>
<li>
<p>I am wondering if this is an issue with removing abandoned connections in Tomcat&rsquo;s JDBC pooling?</p>
<ul>
<li>It&rsquo;s hard to tell because we have <code>logAbandoned</code> enabled, but I don&rsquo;t see anything in the <code>tomcat7</code> service logs in the systemd journal</li>
</ul>
</li>
<li>
<p>I sent another mail to the dspace-tech mailing list with my observations</p>
</li>
<li>
<p>I spent some time trying to test and debug the Tomcat connection pool&rsquo;s settings, but for some reason our logs are either messed up or no connections are actually getting abandoned</p>
</li>
<li>
<p>I compiled this <a href="https://github.com/gnosly/TomcatJdbcConnectionTest">TomcatJdbcConnectionTest</a> and created a bunch of database connections and waited a few minutes, but they never got abandoned until I created more than <code>maxActive</code> (75) of them, after which almost all were purged at once</p>
<ul>
<li>So perhaps our settings are not working right, but at least I know the logging works now&hellip;</li>
</ul>
</li>
</ul>
<h2 id="2019-03-24">2019-03-24</h2>
<ul>
<li>I did some more tests with the <a href="https://github.com/gnosly/TomcatJdbcConnectionTest">TomcatJdbcConnectionTest</a> thing and while monitoring the number of active connections in jconsole and after adjusting the limits quite low I eventually saw some connections get abandoned</li>
<li>I forgot that to connect to a remote JMX session with jconsole you need to use a dynamic SSH SOCKS proxy (as I originally <a href="/cgspace-notes/2017-11/">discovered in 2017-11</a>):</li>
</ul>
<pre tabindex="0"><code>$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=3000 service:jmx:rmi:///jndi/rmi://localhost:5400/jmxrmi -J-DsocksNonProxyHosts=
</code></pre><ul>
<li>I need to remember to check the active connections next time we have issues with blank item pages on CGSpace</li>
<li>In other news, I&rsquo;ve been running G1GC on DSpace Test (linode19) since 2018-11-08 without realizing it, which is probably a good thing</li>
<li>I deployed the latest <code>5_x-prod</code> branch on CGSpace (linode18) and added more validation to the JDBC pool in our Tomcat config
<ul>
<li>This includes the new <code>testWhileIdle</code> and <code>testOnConnect</code> pool settings as well as the two new JDBC interceptors: <code>StatementFinalizer</code> and <code>ConnectionState</code> that should hopefully make sure our connections in the pool are valid</li>
</ul>
</li>
<li>I spent one hour looking at the invalid AGROVOC terms from last week
<ul>
<li>It doesn&rsquo;t seem like any of the editors did any work on this so I did most of them</li>
</ul>
</li>
</ul>
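<p>For reference, the pool validation settings deployed above only take effect with Tomcat's own JDBC pool, which requires setting the <code>factory</code> attribute on the JDBC Resource. A rough sketch of what that looks like in <code>server.xml</code>, with hypothetical connection details rather than our exact production values:</p>

```xml
<!-- Sketch only: names, credentials, and limits are illustrative -->
<Resource name="jdbc/dspace" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="fuuu"
          maxActive="75"
          testOnBorrow="true"
          testWhileIdle="true"
          testOnConnect="true"
          validationQuery="SELECT 1"
          removeAbandoned="true"
          logAbandoned="true"
          jdbcInterceptors="StatementFinalizer;ConnectionState"/>
```

<p>Without the <code>factory</code> attribute Tomcat 7 silently falls back to Commons DBCP, which ignores the <code>tomcat.jdbc</code>-specific attributes.</p>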
<h2 id="2019-03-25">2019-03-25</h2>
<ul>
<li>Finish looking over the 175 invalid AGROVOC terms
<ul>
<li>I need to apply the corrections and deletions this week</li>
</ul>
</li>
<li>Looking at the DBCP status on CGSpace via jconsole and everything looks good, though I wonder why <code>timeBetweenEvictionRunsMillis</code> is -1, because the <a href="https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html">Tomcat 7.0 JDBC docs</a> say the default is 5000&hellip;
<ul>
<li>Could be an error in the docs, as I see the <a href="https://commons.apache.org/proper/commons-dbcp/configuration.html">Apache Commons DBCP</a> has -1 as the default</li>
<li>Maybe I need to re-evaluate the &ldquo;defaults&rdquo; of Tomcat 7&rsquo;s DBCP and set them explicitly in our config</li>
<li>From Tomcat 8 they seem to default to Apache Commons&rsquo; DBCP 2.x</li>
</ul>
</li>
<li>Also, CGSpace doesn&rsquo;t have many Cocoon errors yet this morning:</li>
</ul>
<pre tabindex="0"><code>$ grep &#39;Can not load requested doc&#39; cocoon.log.2019-03-25 | grep -oE &#39;2019-03-25 [0-9]{2}:&#39; | sort | uniq -c
4 2019-03-25 00:
1 2019-03-25 01:
</code></pre><ul>
<li>Holy shit I just realized we&rsquo;ve been using the wrong DBCP pool in Tomcat
<ul>
<li>By default you get the Commons DBCP one unless you specify the factory <code>org.apache.tomcat.jdbc.pool.DataSourceFactory</code></li>
<li>Now I see all my interceptor settings etc in jconsole, where I didn&rsquo;t see them before (also a new <code>tomcat.jdbc</code> mbean)!</li>
<li>No wonder our settings didn&rsquo;t quite match the ones in the <a href="https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html">Tomcat DBCP Pool docs</a></li>
</ul>
</li>
<li>Uptime Robot reported that CGSpace went down and I see the load is very high</li>
<li>The top IPs around the time in the nginx API and web logs were:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;25/Mar/2019:(18|19|20|21)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
9 190.252.43.162
12 157.55.39.140
18 157.55.39.54
21 66.249.66.211
27 40.77.167.185
29 138.220.87.165
30 157.55.39.168
36 157.55.39.9
50 52.23.239.229
2380 45.5.186.2
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;25/Mar/2019:(18|19|20|21)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
354 18.195.78.144
363 190.216.179.100
386 40.77.167.185
484 157.55.39.168
507 157.55.39.9
536 2a01:4f8:140:3192::2
1123 66.249.66.211
1186 93.179.69.74
1222 35.174.184.209
1720 2a01:4f8:13b:1296::2
</code></pre><ul>
<li>The IPs look pretty normal except we&rsquo;ve never seen <code>93.179.69.74</code> before, and it uses the following user agent:</li>
</ul>
<pre tabindex="0"><code>Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.20 Safari/535.1
</code></pre><ul>
<li>Surprisingly they are re-using their Tomcat session:</li>
</ul>
<pre tabindex="0"><code>$ grep -o -E &#39;session_id=[A-Z0-9]{32}:ip_addr=93.179.69.74&#39; dspace.log.2019-03-25 | sort | uniq | wc -l
1
</code></pre><ul>
<li>That&rsquo;s weird because the total number of sessions today seems low compared to recent days:</li>
</ul>
<pre tabindex="0"><code>$ grep -o -E &#39;session_id=[A-Z0-9]{32}&#39; dspace.log.2019-03-25 | sort -u | wc -l
5657
$ grep -o -E &#39;session_id=[A-Z0-9]{32}&#39; dspace.log.2019-03-24 | sort -u | wc -l
17710
$ grep -o -E &#39;session_id=[A-Z0-9]{32}&#39; dspace.log.2019-03-23 | sort -u | wc -l
17179
$ grep -o -E &#39;session_id=[A-Z0-9]{32}&#39; dspace.log.2019-03-22 | sort -u | wc -l
7904
</code></pre><ul>
<li>PostgreSQL seems to be pretty busy:</li>
</ul>
<pre tabindex="0"><code>$ psql -c &#39;select * from pg_stat_activity&#39; | grep -o -E &#39;(dspaceWeb|dspaceApi|dspaceCli)&#39; | sort | uniq -c
11 dspaceApi
10 dspaceCli
67 dspaceWeb
</code></pre><ul>
<li>I restarted Tomcat and deployed the new Tomcat JDBC settings on CGSpace since I had to restart the server anyways
<ul>
<li>I need to watch this carefully though because I&rsquo;ve read some places that Tomcat&rsquo;s DBCP doesn&rsquo;t track statements and might create memory leaks if an application doesn&rsquo;t close statements before a connection gets returned back to the pool</li>
</ul>
</li>
<li>According to Uptime Robot the server was up and down a few more times over the next hour so I restarted Tomcat again</li>
</ul>
<h2 id="2019-03-26">2019-03-26</h2>
<ul>
<li>UptimeRobot says CGSpace went down again and I see the load is again at 14.0!</li>
<li>Here are the top IPs in nginx logs in the last hour:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;26/Mar/2019:(06|07)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
3 35.174.184.209
3 66.249.66.81
4 104.198.9.108
4 154.77.98.122
4 2.50.152.13
10 196.188.12.245
14 66.249.66.80
414 45.5.184.72
535 45.5.186.2
2014 205.186.128.185
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;26/Mar/2019:(06|07)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
157 41.204.190.40
160 18.194.46.84
160 54.70.40.11
168 31.6.77.23
188 66.249.66.81
284 3.91.79.74
405 2a01:4f8:140:3192::2
471 66.249.66.80
712 35.174.184.209
784 2a01:4f8:13b:1296::2
</code></pre><ul>
<li>The two IPV6 addresses are something called BLEXBot, which seems to check the robots.txt file and then completely ignore it by making thousands of requests to dynamic pages like Browse and Discovery</li>
<li>Then <code>35.174.184.209</code> is MauiBot, which does the same thing</li>
<li>Also <code>3.91.79.74</code>, which appears to be CCBot, does the same</li>
<li>I will add these three to the &ldquo;bad bot&rdquo; rate limiting that I originally used for Baidu</li>
<li>Going further, these are the IPs making requests to Discovery and Browse pages so far today:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;(discover|browse)&#34; | grep -E &#34;26/Mar/2019:&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
120 34.207.146.166
128 3.91.79.74
132 108.179.57.67
143 34.228.42.25
185 216.244.66.198
430 54.70.40.11
1033 93.179.69.74
1206 2a01:4f8:140:3192::2
2678 2a01:4f8:13b:1296::2
3790 35.174.184.209
</code></pre><ul>
<li><code>54.70.40.11</code> is SemanticScholarBot</li>
<li><code>216.244.66.198</code> is DotBot</li>
<li><code>93.179.69.74</code> is some IP in Ukraine, which I will add to the list of bot IPs in nginx</li>
<li>I can only hope that this helps the load go down because all this traffic is disrupting the service for normal users and well-behaved bots (and interrupting my dinner and breakfast)</li>
<li>Looking at the database usage I&rsquo;m wondering why there are so many connections from the DSpace CLI:</li>
</ul>
<pre tabindex="0"><code>$ psql -c &#39;select * from pg_stat_activity&#39; | grep -o -E &#39;(dspaceWeb|dspaceApi|dspaceCli)&#39; | sort | uniq -c
5 dspaceApi
10 dspaceCli
13 dspaceWeb
</code></pre><ul>
<li>Looking closer I see they are all idle&hellip; so at least I know the load isn&rsquo;t coming from some background nightly task or something</li>
<li>Make a minor edit to my <code>agrovoc-lookup.py</code> script to match subject terms with parentheses like <code>COCOA (PLANT)</code></li>
<li>Test 89 corrections and 79 deletions for AGROVOC subject terms from the ones I cleaned up in the last week:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i /tmp/2019-03-26-AGROVOC-89-corrections.csv -db dspace -u dspace -p &#39;fuuu&#39; -f dc.subject -m 57 -t correct -d -n
$ ./delete-metadata-values.py -i /tmp/2019-03-26-AGROVOC-79-deletions.csv -db dspace -u dspace -p &#39;fuuu&#39; -m 57 -f dc.subject -d -n
</code></pre><ul>
<li>UptimeRobot says CGSpace is down again, but it seems to just be slow, as the load is over 10.0</li>
<li>Looking at the nginx logs I don&rsquo;t see anything terribly abusive, but SemrushBot has made ~3,000 requests to Discovery and Browse pages today:</li>
</ul>
<pre tabindex="0"><code># grep SemrushBot /var/log/nginx/access.log | grep -E &#34;26/Mar/2019&#34; | grep -E &#39;(discover|browse)&#39; | wc -l
2931
</code></pre><ul>
<li>So I&rsquo;m adding it to the badbot rate limiting in nginx, and actually, I kinda feel like just blocking all user agents with &ldquo;bot&rdquo; in the name for a few days to see if things calm down&hellip; maybe not just yet</li>
<li>Otherwise, these are the top users in the web and API logs in the last hour (18&ndash;19):</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E &#34;26/Mar/2019:(18|19)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
54 41.216.228.158
65 199.47.87.140
75 157.55.39.238
77 157.55.39.237
89 157.55.39.236
100 18.196.196.108
128 18.195.78.144
277 2a01:4f8:13b:1296::2
291 66.249.66.80
328 35.174.184.209
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E &#34;26/Mar/2019:(18|19)&#34; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 10
2 2409:4066:211:2caf:3c31:3fae:2212:19cc
2 35.10.204.140
2 45.251.231.45
2 95.108.181.88
2 95.137.190.2
3 104.198.9.108
3 107.167.109.88
6 66.249.66.80
13 41.89.230.156
1860 45.5.184.2
</code></pre><ul>
<li>For the XMLUI I see <code>18.195.78.144</code> and <code>18.196.196.108</code> requesting only CTA items and with no user agent</li>
<li>They are responsible for almost 1,000 XMLUI sessions today:</li>
</ul>
<pre tabindex="0"><code>$ grep -o -E &#39;session_id=[A-Z0-9]{32}:ip_addr=(18.195.78.144|18.196.196.108)&#39; dspace.log.2019-03-26 | sort | uniq | wc -l
937
</code></pre><ul>
<li>I will add their IPs to the list of bot IPs in nginx so I can tag them as bots to let Tomcat&rsquo;s Crawler Session Manager Valve to force them to re-use their session</li>
<li>Another user agent behaving badly in Colombia is &ldquo;GuzzleHttp/6.3.3 curl/7.47.0 PHP/7.0.30-0ubuntu0.16.04.1&rdquo;</li>
<li>I will add curl to the Tomcat Crawler Session Manager because anyone using curl is most likely making automated read-only requests</li>
<li>I will add GuzzleHttp to the nginx badbots rate limiting, because it is making requests to dynamic Discovery pages</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep 45.5.184.72 | grep -E &#34;26/Mar/2019:&#34; | grep -E &#39;(discover|browse)&#39; | wc -l
119
</code></pre><ul>
<li>What&rsquo;s strange is that I can&rsquo;t see any of their requests in the DSpace log&hellip;</li>
</ul>
<pre tabindex="0"><code>$ grep -I -c 45.5.184.72 dspace.log.2019-03-26
0
</code></pre><h2 id="2019-03-28">2019-03-28</h2>
<ul>
<li>Run the corrections and deletions to AGROVOC (dc.subject) on DSpace Test and CGSpace, and then start a full re-index of Discovery</li>
<li>What the hell is going on with this CTA publication?</li>
</ul>
<pre tabindex="0"><code># grep Spore-192-EN-web.pdf /var/log/nginx/access.log | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n
1 37.48.65.147
1 80.113.172.162
2 108.174.5.117
2 83.110.14.208
4 18.196.8.188
84 18.195.78.144
644 18.194.46.84
1144 18.196.196.108
</code></pre><ul>
<li>None of these 18.x.x.x IPs specify a user agent and they are all on Amazon!</li>
<li>Shortly after I started the re-indexing UptimeRobot began to complain that CGSpace was down, then up, then down, then up&hellip;</li>
<li>I see the load on the server is about 10.0 again for some reason though I don&rsquo;t know WHAT is causing that load
<ul>
<li>It could be the CPU steal metric, as if Linode has oversold the CPU resources on this VM host&hellip;</li>
</ul>
</li>
<li>Here are the Munin graphs of CPU usage for the last day, week, and year:</li>
</ul>
<p><img src="/cgspace-notes/2019/03/cpu-day-fs8.png" alt="CPU day"></p>
<p><img src="/cgspace-notes/2019/03/cpu-week-fs8.png" alt="CPU week"></p>
<p><img src="/cgspace-notes/2019/03/cpu-year-fs8.png" alt="CPU year"></p>
<ul>
<li>What&rsquo;s clear from this is that some other VM on our host has heavy usage for about four hours at 6AM and 6PM and that during that time the load on our server spikes
<ul>
<li>CPU steal has drastically increased since March 25th</li>
<li>It might be time to move to a dedicated CPU VM instances, or even real servers</li>
<li>For now I just sent a support ticket to bring this to Linode&rsquo;s attention</li>
</ul>
</li>
<li>In other news, I see that it&rsquo;s not even the end of the month yet and we have 3.6 million hits already:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/* | grep -cE &#34;[0-9]{1,2}/Mar/2019&#34;
3654911
</code></pre><ul>
<li>In other other news I see that DSpace has no statistics for years before 2019 currently, yet when I connect to Solr I see all the cores up</li>
</ul>
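<p>Rather than trusting the UI, one could check which statistics shards are actually loaded by querying Solr's cores API; this sketch parses the JSON that <code>/solr/admin/cores?action=STATUS&amp;wt=json</code> returns (the sample payload here is fabricated for illustration, not a real response):</p>

```python
import json

def loaded_cores(status_json):
    """Return the names of the cores Solr reports in a CoreAdmin STATUS response."""
    return sorted(json.loads(status_json)["status"].keys())

# Fabricated example of the response shape; a real check would fetch
# http://localhost:8081/solr/admin/cores?action=STATUS&wt=json
sample = json.dumps({
    "responseHeader": {"status": 0},
    "status": {
        "statistics": {"name": "statistics", "uptime": 1000},
        "statistics-2018": {"name": "statistics-2018", "uptime": 1000},
    },
})
print(loaded_cores(sample))  # → ['statistics', 'statistics-2018']
```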
<h2 id="2019-03-29">2019-03-29</h2>
<ul>
<li>Sent Linode more information from <code>top</code> and <code>iostat</code> about the resource usage on linode18
<ul>
<li>Linode agreed that the CPU steal percentage was high and migrated the VM to a new host</li>
<li>Now the resource contention is much lower according to <code>iostat 1 10</code></li>
</ul>
</li>
<li>I restarted Tomcat to see if I could fix the missing pre-2019 statistics (yes it fixed it)
<ul>
<li>Though I looked in the Solr Admin UI and noticed a logging dashboard that shows warnings and errors, and the first one concerning Solr cores was on 3/27/2019 at 8:50:35 AM, so I should check the logs around that time to see if something happened</li>
</ul>
</li>
</ul>
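<p>CPU steal can also be checked directly from <code>/proc/stat</code> without waiting for Munin: the steal column is the eighth jiffy counter on the <code>cpu</code> line. A minimal parser, fed a fabricated sample line for illustration (a live check would read the first line of <code>/proc/stat</code> instead):</p>

```python
def steal_percent(cpu_line):
    """Compute the steal share of total jiffies from a /proc/stat 'cpu' line."""
    fields = [int(x) for x in cpu_line.split()[1:]]
    # fields: user nice system idle iowait irq softirq steal guest guest_nice
    steal = fields[7]
    return 100.0 * steal / sum(fields)

# Fabricated sample: 40 steal jiffies out of 1000 total
sample = "cpu 100 0 50 800 10 0 0 40 0 0"
print(round(steal_percent(sample), 1))  # → 4.0
```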
<h2 id="2019-03-31">2019-03-31</h2>
<ul>
<li>After a few days of the CGSpace VM (linode18) being migrated to a new host the CPU steal is gone and the site is much more responsive</li>
</ul>
<p><img src="/cgspace-notes/2019/03/cpu-week-migrated.png" alt="linode18 CPU usage after migration"></p>
<ul>
<li>It is frustrating to see that the load spikes from our own legitimate traffic on the server were <em>very</em> aggravated and drawn out by the contention for CPU on this host</li>
<li>We had 4.2 million hits this month according to the web server logs:</li>
</ul>
<pre tabindex="0"><code># time zcat --force /var/log/nginx/* | grep -cE &#34;[0-9]{1,2}/Mar/2019&#34;
4218841
real 0m26.609s
user 0m31.657s
sys 0m2.551s
</code></pre><ul>
<li>Interestingly, now that the CPU steal is not an issue the REST API is ten seconds faster than it was in <a href="/cgspace-notes/2018-10/">2018-10</a>:</li>
</ul>
<pre tabindex="0"><code>$ time http --print h &#39;https://cgspace.cgiar.org/rest/items?expand=metadata,bitstreams,parentCommunityList&amp;limit=100&amp;offset=0&#39;
...
0.33s user 0.07s system 2% cpu 17.167 total
0.27s user 0.04s system 1% cpu 16.643 total
0.24s user 0.09s system 1% cpu 17.764 total
0.25s user 0.06s system 1% cpu 15.947 total
</code></pre><ul>
<li>I did some research on dedicated servers to potentially replace Linode for CGSpace stuff and it seems Hetzner is pretty good
<ul>
<li>This <a href="https://www.hetzner.com/dedicated-rootserver/px62-nvme">PX62-NVME system</a> looks great and is half the price of our current Linode instance</li>
<li>It has 64GB of ECC RAM, six core Xeon processor from 2018, and 2x960GB NVMe storage</li>
<li>The alternative of staying with Linode and using dedicated CPU instances with added block storage gets expensive quickly if we want to keep more than 16GB of RAM (do we?)
<ul>
<li>Regarding RAM, our JVM heap is 8GB and we leave the rest of the system&rsquo;s 32GB of RAM to PostgreSQL and Solr buffers</li>
<li>Seeing as we have 56GB of Solr data it might be better to have more RAM in order to keep more of it in memory</li>
<li>Also, I know that the Linode block storage is a major bottleneck for Solr indexing</li>
</ul>
</li>
</ul>
</li>
<li>Looking at the weird issue with shitloads of downloads on the <a href="https://cgspace.cgiar.org/handle/10568/100289">CTA item</a> again</li>
<li>The item was added on 2019-03-13 and these three IPs have attempted to download the item&rsquo;s bitstream 43,000 times since it was added eighteen days ago:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/access.log.{2..17}.gz | grep &#39;Spore-192-EN-web.pdf&#39; | awk &#39;{print $1}&#39; | sort | uniq -c | sort -n | tail -n 5
42 196.43.180.134
621 185.247.144.227
8102 18.194.46.84
14927 18.196.196.108
20265 18.195.78.144
</code></pre><ul>
<li>I will send a mail to CTA to ask if they know these IPs</li>
<li>I wonder if the Cocoon errors we had earlier this month were inadvertently related to the CPU steal issue&hellip; I see very low occurrences of the &ldquo;Can not load requested doc&rdquo; error in the Cocoon logs the past few days</li>
<li>Helping Perttu debug some issues with the REST API on DSpace Test
<ul>
<li>He was getting an HTTP 500 when working with a collection, and I see the following in the DSpace log:</li>
</ul>
</li>
</ul>
<pre tabindex="0"><code>2019-03-29 09:10:07,311 ERROR org.dspace.rest.Resource @ Could not delete collection(id=1451), AuthorizeException. Message: org.dspace.authorize.AuthorizeException: Authorization denied for action ADMIN on COLLECTION:1451 by user 9492
</code></pre><ul>
<li>IWMI people emailed to ask why two items with the same DOI don&rsquo;t have the same Altmetric score:
<ul>
<li><a href="https://cgspace.cgiar.org/handle/10568/89846">https://cgspace.cgiar.org/handle/10568/89846</a> (Bioversity)</li>
<li><a href="https://cgspace.cgiar.org/handle/10568/89975">https://cgspace.cgiar.org/handle/10568/89975</a> (CIAT)</li>
</ul>
</li>
<li>Only the second one has an Altmetric score (208)</li>
<li>I tweeted the Handles for both of them to see if Altmetric will pick them up
<ul>
<li>About twenty minutes later the Altmetric score for the second one had increased from 208 to 209, but the first still had a score of zero</li>
<li>Interestingly, if I look at the network requests during page load for the first item I see the following response payload for the Altmetric API request:</li>
</ul>
</li>
</ul>
<pre tabindex="0"><code>_altmetric.embed_callback({&#34;title&#34;:&#34;Distilling the role of ecosystem services in the Sustainable Development Goals&#34;,&#34;doi&#34;:&#34;10.1016/j.ecoser.2017.10.010&#34;,&#34;tq&#34;:[&#34;Progress on 12 of 17 #SDGs rely on #ecosystemservices - new paper co-authored by a number of&#34;,&#34;Distilling the role of ecosystem services in the Sustainable Development Goals - new paper by @SNAPPartnership researchers&#34;,&#34;How do #ecosystemservices underpin the #SDGs? Our new paper starts counting the ways. Check it out in the link below!&#34;,&#34;Excellent paper about the contribution of #ecosystemservices to SDGs&#34;,&#34;So great to work with amazing collaborators&#34;],&#34;altmetric_jid&#34;:&#34;521611533cf058827c00000a&#34;,&#34;issns&#34;:[&#34;2212-0416&#34;],&#34;journal&#34;:&#34;Ecosystem Services&#34;,&#34;cohorts&#34;:{&#34;sci&#34;:58,&#34;pub&#34;:239,&#34;doc&#34;:3,&#34;com&#34;:2},&#34;context&#34;:{&#34;all&#34;:{&#34;count&#34;:12732768,&#34;mean&#34;:7.8220956572788,&#34;rank&#34;:56146,&#34;pct&#34;:99,&#34;higher_than&#34;:12676701},&#34;journal&#34;:{&#34;count&#34;:549,&#34;mean&#34;:7.7567299270073,&#34;rank&#34;:2,&#34;pct&#34;:99,&#34;higher_than&#34;:547},&#34;similar_age_3m&#34;:{&#34;count&#34;:386919,&#34;mean&#34;:11.573702536454,&#34;rank&#34;:3299,&#34;pct&#34;:99,&#34;higher_than&#34;:383619},&#34;similar_age_journal_3m&#34;:{&#34;count&#34;:28,&#34;mean&#34;:9.5648148148148,&#34;rank&#34;:1,&#34;pct&#34;:96,&#34;higher_than&#34;:27}},&#34;authors&#34;:[&#34;Sylvia L.R. Wood&#34;,&#34;Sarah K. Jones&#34;,&#34;Justin A. Johnson&#34;,&#34;Kate A. Brauman&#34;,&#34;Rebecca Chaplin-Kramer&#34;,&#34;Alexander Fremier&#34;,&#34;Evan Girvetz&#34;,&#34;Line J. Gordon&#34;,&#34;Carrie V. Kappel&#34;,&#34;Lisa Mandle&#34;,&#34;Mark Mulligan&#34;,&#34;Patrick O&#39;Farrell&#34;,&#34;William K. Smith&#34;,&#34;Louise Willemen&#34;,&#34;Wei Zhang&#34;,&#34;Fabrice A. 
DeClerck&#34;],&#34;type&#34;:&#34;article&#34;,&#34;handles&#34;:[&#34;10568/89975&#34;,&#34;10568/89846&#34;],&#34;handle&#34;:&#34;10568/89975&#34;,&#34;altmetric_id&#34;:29816439,&#34;schema&#34;:&#34;1.5.4&#34;,&#34;is_oa&#34;:false,&#34;cited_by_posts_count&#34;:377,&#34;cited_by_tweeters_count&#34;:302,&#34;cited_by_fbwalls_count&#34;:1,&#34;cited_by_gplus_count&#34;:1,&#34;cited_by_policies_count&#34;:2,&#34;cited_by_accounts_count&#34;:306,&#34;last_updated&#34;:1554039125,&#34;score&#34;:208.65,&#34;history&#34;:{&#34;1y&#34;:54.75,&#34;6m&#34;:10.35,&#34;3m&#34;:5.5,&#34;1m&#34;:5.5,&#34;1w&#34;:1.5,&#34;6d&#34;:1.5,&#34;5d&#34;:1.5,&#34;4d&#34;:1.5,&#34;3d&#34;:1.5,&#34;2d&#34;:1,&#34;1d&#34;:1,&#34;at&#34;:208.65},&#34;url&#34;:&#34;http://dx.doi.org/10.1016/j.ecoser.2017.10.010&#34;,&#34;added_on&#34;:1512153726,&#34;published_on&#34;:1517443200,&#34;readers&#34;:{&#34;citeulike&#34;:0,&#34;mendeley&#34;:248,&#34;connotea&#34;:0},&#34;readers_count&#34;:248,&#34;images&#34;:{&#34;small&#34;:&#34;https://badges.altmetric.com/?size=64&amp;score=209&amp;types=tttttfdg&#34;,&#34;medium&#34;:&#34;https://badges.altmetric.com/?size=100&amp;score=209&amp;types=tttttfdg&#34;,&#34;large&#34;:&#34;https://badges.altmetric.com/?size=180&amp;score=209&amp;types=tttttfdg&#34;},&#34;details_url&#34;:&#34;http://www.altmetric.com/details.php?citation_id=29816439&#34;})
</code></pre><ul>
<li>The response payload for the second one is the same:</li>
</ul>
<pre tabindex="0"><code>_altmetric.embed_callback({&#34;title&#34;:&#34;Distilling the role of ecosystem services in the Sustainable Development Goals&#34;,&#34;doi&#34;:&#34;10.1016/j.ecoser.2017.10.010&#34;,&#34;tq&#34;:[&#34;Progress on 12 of 17 #SDGs rely on #ecosystemservices - new paper co-authored by a number of&#34;,&#34;Distilling the role of ecosystem services in the Sustainable Development Goals - new paper by @SNAPPartnership researchers&#34;,&#34;How do #ecosystemservices underpin the #SDGs? Our new paper starts counting the ways. Check it out in the link below!&#34;,&#34;Excellent paper about the contribution of #ecosystemservices to SDGs&#34;,&#34;So great to work with amazing collaborators&#34;],&#34;altmetric_jid&#34;:&#34;521611533cf058827c00000a&#34;,&#34;issns&#34;:[&#34;2212-0416&#34;],&#34;journal&#34;:&#34;Ecosystem Services&#34;,&#34;cohorts&#34;:{&#34;sci&#34;:58,&#34;pub&#34;:239,&#34;doc&#34;:3,&#34;com&#34;:2},&#34;context&#34;:{&#34;all&#34;:{&#34;count&#34;:12732768,&#34;mean&#34;:7.8220956572788,&#34;rank&#34;:56146,&#34;pct&#34;:99,&#34;higher_than&#34;:12676701},&#34;journal&#34;:{&#34;count&#34;:549,&#34;mean&#34;:7.7567299270073,&#34;rank&#34;:2,&#34;pct&#34;:99,&#34;higher_than&#34;:547},&#34;similar_age_3m&#34;:{&#34;count&#34;:386919,&#34;mean&#34;:11.573702536454,&#34;rank&#34;:3299,&#34;pct&#34;:99,&#34;higher_than&#34;:383619},&#34;similar_age_journal_3m&#34;:{&#34;count&#34;:28,&#34;mean&#34;:9.5648148148148,&#34;rank&#34;:1,&#34;pct&#34;:96,&#34;higher_than&#34;:27}},&#34;authors&#34;:[&#34;Sylvia L.R. Wood&#34;,&#34;Sarah K. Jones&#34;,&#34;Justin A. Johnson&#34;,&#34;Kate A. Brauman&#34;,&#34;Rebecca Chaplin-Kramer&#34;,&#34;Alexander Fremier&#34;,&#34;Evan Girvetz&#34;,&#34;Line J. Gordon&#34;,&#34;Carrie V. Kappel&#34;,&#34;Lisa Mandle&#34;,&#34;Mark Mulligan&#34;,&#34;Patrick O&#39;Farrell&#34;,&#34;William K. Smith&#34;,&#34;Louise Willemen&#34;,&#34;Wei Zhang&#34;,&#34;Fabrice A. 
DeClerck&#34;],&#34;type&#34;:&#34;article&#34;,&#34;handles&#34;:[&#34;10568/89975&#34;,&#34;10568/89846&#34;],&#34;handle&#34;:&#34;10568/89975&#34;,&#34;altmetric_id&#34;:29816439,&#34;schema&#34;:&#34;1.5.4&#34;,&#34;is_oa&#34;:false,&#34;cited_by_posts_count&#34;:377,&#34;cited_by_tweeters_count&#34;:302,&#34;cited_by_fbwalls_count&#34;:1,&#34;cited_by_gplus_count&#34;:1,&#34;cited_by_policies_count&#34;:2,&#34;cited_by_accounts_count&#34;:306,&#34;last_updated&#34;:1554039125,&#34;score&#34;:208.65,&#34;history&#34;:{&#34;1y&#34;:54.75,&#34;6m&#34;:10.35,&#34;3m&#34;:5.5,&#34;1m&#34;:5.5,&#34;1w&#34;:1.5,&#34;6d&#34;:1.5,&#34;5d&#34;:1.5,&#34;4d&#34;:1.5,&#34;3d&#34;:1.5,&#34;2d&#34;:1,&#34;1d&#34;:1,&#34;at&#34;:208.65},&#34;url&#34;:&#34;http://dx.doi.org/10.1016/j.ecoser.2017.10.010&#34;,&#34;added_on&#34;:1512153726,&#34;published_on&#34;:1517443200,&#34;readers&#34;:{&#34;citeulike&#34;:0,&#34;mendeley&#34;:248,&#34;connotea&#34;:0},&#34;readers_count&#34;:248,&#34;images&#34;:{&#34;small&#34;:&#34;https://badges.altmetric.com/?size=64&amp;score=209&amp;types=tttttfdg&#34;,&#34;medium&#34;:&#34;https://badges.altmetric.com/?size=100&amp;score=209&amp;types=tttttfdg&#34;,&#34;large&#34;:&#34;https://badges.altmetric.com/?size=180&amp;score=209&amp;types=tttttfdg&#34;},&#34;details_url&#34;:&#34;http://www.altmetric.com/details.php?citation_id=29816439&#34;})
</code></pre><ul>
<li>Very interesting to see this in the response:</li>
</ul>
<pre tabindex="0"><code>&#34;handles&#34;:[&#34;10568/89975&#34;,&#34;10568/89846&#34;],
&#34;handle&#34;:&#34;10568/89975&#34;
</code></pre><ul>
<li>On further inspection I see that the Altmetric explorer pages for each of these Handles are actually doing the right thing:
<ul>
<li><a href="https://www.altmetric.com/explorer/highlights?identifier=10568%2F89846">https://www.altmetric.com/explorer/highlights?identifier=10568%2F89846</a></li>
<li><a href="https://www.altmetric.com/explorer/highlights?identifier=10568%2F89975">https://www.altmetric.com/explorer/highlights?identifier=10568%2F89975</a></li>
</ul>
</li>
<li>So it&rsquo;s likely the DSpace Altmetric badge code that is deciding not to show the badge</li>
</ul>
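<p>The embed responses above can be unpacked with a short script to confirm which Handle Altmetric considers canonical. This is a minimal sketch in Python, assuming the <code>_altmetric.embed_callback(...)</code> JSONP wrapper shown above; the payload here is abbreviated to the fields relevant to the badge question:</p>

```python
import json
import re

# Abbreviated JSONP response from the Altmetric embed API; the full payload
# is shown earlier in these notes. Only the relevant fields are kept here.
jsonp = ('_altmetric.embed_callback({"handles":["10568/89975","10568/89846"],'
         '"handle":"10568/89975","score":208.65})')

# Strip the JSONP callback wrapper to get the raw JSON object
payload = json.loads(re.search(r'embed_callback\((.*)\)$', jsonp).group(1))

# Both items' Handles appear in "handles", but only the CIAT one is canonical
print(payload["handles"])  # ['10568/89975', '10568/89846']
print(payload["handle"])   # 10568/89975
```

<p>This is speculation about the badge code, but if it compares the page&rsquo;s own Handle against the single canonical <code>handle</code> field rather than the <code>handles</code> list, the Bioversity item (10568/89846) would get no badge, which matches the behaviour observed above.</p>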
<!-- raw HTML omitted -->
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2024-04/">April, 2024</a></li>
<li><a href="/cgspace-notes/2024-03/">March, 2024</a></li>
<li><a href="/cgspace-notes/2024-02/">February, 2024</a></li>
<li><a href="/cgspace-notes/2024-01/">January, 2024</a></li>
<li><a href="/cgspace-notes/2023-12/">December, 2023</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>