<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="September, 2018" />
<meta property="og:description" content="2018-09-02
New PostgreSQL JDBC driver version 42.2.5
I&rsquo;ll update the DSpace role in our Ansible infrastructure playbooks and run the updated playbooks on CGSpace and DSpace Test
Also, I&rsquo;ll re-run the postgresql tasks because the custom PostgreSQL variables are dynamic according to the system&rsquo;s RAM, and we never re-ran them after migrating to larger Linodes last month
I&rsquo;m testing the new DSpace 5.8 branch in my Ubuntu 18.04 environment and I&rsquo;m getting those autowire errors in Tomcat 8.5.30 again:
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2018-09/" />
<meta property="article:published_time" content="2018-09-02T09:55:54+03:00" />
<meta property="article:modified_time" content="2020-04-13T15:30:24+03:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="September, 2018"/>
<meta name="twitter:description" content="2018-09-02
New PostgreSQL JDBC driver version 42.2.5
I&rsquo;ll update the DSpace role in our Ansible infrastructure playbooks and run the updated playbooks on CGSpace and DSpace Test
Also, I&rsquo;ll re-run the postgresql tasks because the custom PostgreSQL variables are dynamic according to the system&rsquo;s RAM, and we never re-ran them after migrating to larger Linodes last month
I&rsquo;m testing the new DSpace 5.8 branch in my Ubuntu 18.04 environment and I&rsquo;m getting those autowire errors in Tomcat 8.5.30 again:
"/>
<meta name="generator" content="Hugo 0.89.4" />
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "September, 2018",
"url": "https://alanorth.github.io/cgspace-notes/2018-09/",
"wordCount": "5246",
"datePublished": "2018-09-02T09:55:54+03:00",
"dateModified": "2020-04-13T15:30:24+03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2018-09/">
<title>September, 2018 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.beb8012edc08ba10be012f079d618dc243812267efe62e11f22fe49618f976a4.css" rel="stylesheet" integrity="sha256-vrgBLtwIuhC&#43;AS8HnWGNwkOBImfv5i4R8i/klhj5dqQ=" crossorigin="anonymous">
<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.f5072c55a0721857184db93a50561d7dc13975b4de2e19db7f81eb5f3fa57270.js" integrity="sha256-9QcsVaByGFcYTbk6UFYdfcE5dbTeLhnbf4HrXz&#43;lcnA=" crossorigin="anonymous"></script>
<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2018-09/">September, 2018</a></h2>
<p class="blog-post-meta">
<time datetime="2018-09-02T09:55:54+03:00">Sun Sep 02, 2018</time>
in
<span class="fas fa-folder" aria-hidden="true"></span>&nbsp;<a href="/cgspace-notes/categories/notes/" rel="category tag">Notes</a>
</p>
</header>
<h2 id="2018-09-02">2018-09-02</h2>
<ul>
<li>New <a href="https://jdbc.postgresql.org/documentation/changelog.html#version_42.2.5">PostgreSQL JDBC driver version 42.2.5</a></li>
<li>I&rsquo;ll update the DSpace role in our <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure playbooks</a> and run the updated playbooks on CGSpace and DSpace Test</li>
<li>Also, I&rsquo;ll re-run the <code>postgresql</code> tasks because the custom PostgreSQL variables are dynamic according to the system&rsquo;s RAM, and we never re-ran them after migrating to larger Linodes last month</li>
<li>I&rsquo;m testing the new DSpace 5.8 branch in my Ubuntu 18.04 environment and I&rsquo;m getting those autowire errors in Tomcat 8.5.30 again:</li>
</ul>
<pre tabindex="0"><code>02-Sep-2018 11:18:52.678 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.dspace.servicemanager.servlet.DSpaceKernelServletContextListener]
java.lang.RuntimeException: Failure during filter init: Failed to startup the DSpace Service Manager: failure starting up spring service manager: Error creating bean with name 'conversionService' defined in file [/home/dspacetest.cgiar.org/config/spring/xmlui/spring-dspace-addon-cua-services.xml]: Cannot create inner bean 'com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter#4c5d5a2' of type [com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter] while setting bean property 'converters' with key [1]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter#4c5d5a2': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$FilterConverter com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter.filterConverter; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type [com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$FilterConverter] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
at org.dspace.servicemanager.servlet.DSpaceKernelServletContextListener.contextInitialized(DSpaceKernelServletContextListener.java:92)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4776)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5240)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:754)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:730)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:629)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1838)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Failed to startup the DSpace Service Manager: failure starting up spring service manager: Error creating bean with name 'conversionService' defined in file [/home/dspacetest.cgiar.org/config/spring/xmlui/spring-dspace-addon-cua-services.xml]: Cannot create inner bean 'com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter#4c5d5a2' of type [com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter] while setting bean property 'converters' with key [1]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter#4c5d5a2': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$FilterConverter com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$ColumnsConverter.filterConverter; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type [com.atmire.app.xmlui.aspect.statistics.mostpopular.MostPopularConfig$FilterConverter] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations:
</code></pre><ul>
<li>Full log here: <a href="https://gist.github.com/alanorth/1e4ae567b853fea9d9dbf1a030ecd8c2">https://gist.github.com/alanorth/1e4ae567b853fea9d9dbf1a030ecd8c2</a></li>
<li>XMLUI fails to load, but the REST, SOLR, JSPUI, etc. work</li>
<li>The old <code>5_x-prod-dspace-5.5</code> branch does work in Ubuntu 18.04 with Tomcat 8.5.30-1ubuntu1.4, however!</li>
<li>And the <code>5_x-prod</code> DSpace 5.8 branch does work in Tomcat 8.5.x on my Arch Linux laptop&hellip;</li>
<li>I&rsquo;m not sure where the issue is then!</li>
</ul>
<h2 id="2018-09-03">2018-09-03</h2>
<ul>
<li>Abenet says she&rsquo;s getting three emails about periodic statistics reports every day since the DSpace 5.8 upgrade last week</li>
<li>They are from the CUA module</li>
<li>Two of them have &ldquo;no data&rdquo; and one has a &ldquo;null&rdquo; title</li>
<li>The last one is a report of the top downloaded items, and includes a graph</li>
<li>She will try to click the &ldquo;Unsubscribe&rdquo; link in the first two to see if it works, otherwise we should contact Atmire</li>
<li>The only one she remembers subscribing to is the top downloads one</li>
</ul>
<h2 id="2018-09-04">2018-09-04</h2>
<ul>
<li>I&rsquo;m looking over the latest round of IITA records from Sisay: <a href="https://dspacetest.cgiar.org/handle/10568/104230">Mercy1806_August_29</a>
<ul>
<li>All fields are split with multiple columns like <code>cg.authorship.types</code> and <code>cg.authorship.types[]</code></li>
<li>This makes it super annoying to do the checks and cleanup, so I will merge them (also time consuming)</li>
<li>Five items had <code>dc.date.issued</code> values like <code>2013-5</code> so I corrected them to be <code>2013-05</code> (a quick way to do this in bulk is sketched after this list)</li>
<li>Several metadata fields had values with newlines in them (even in some titles!), which I fixed by trimming the consecutive whitespace in OpenRefine</li>
<li>Many (91!) items from before 2011 are indicated as having a CRP, but CRPs didn&rsquo;t exist then so this is impossible
<ul>
<li>I got all items that were from 2011 and onwards using a custom facet with this GREL on the <code>dc.date.issued</code> column: <code>isNotNull(value.match(/201[1-8].*/))</code> and then blanking their CRPs</li>
</ul>
</li>
<li>Some affiliations used only one separator (|) for multiple values</li>
<li>I replaced smart quotes like <code>&rsquo;</code> with plain ones</li>
<li>Some inconsistencies in <code>cg.subject.iita</code> like COWPEA and COWPEAS, and YAM and YAMS, etc., as well as some spelling mistakes like IMPACT ASSESSMENTN</li>
<li>Some values in the <code>dc.identifier.isbn</code> field are actually ISSNs so I moved them to the <code>dc.identifier.issn</code> column</li>
<li>I found one invalid ISSN using a custom text facet with the regex from the <a href="https://en.wikipedia.org/wiki/International_Standard_Serial_Number#Code_format">ISSN page on Wikipedia</a>: <code>isNotBlank(value.match(/^\d{4}-\d{3}[\dxX]$/))</code></li>
<li>One invalid value for <code>dc.type</code></li>
</ul>
</li>
<li>Abenet says she hasn&rsquo;t received any more subscription emails from the CUA module since she unsubscribed yesterday, so I think we don&rsquo;t need to create an issue on Atmire&rsquo;s bug tracker anymore</li>
</ul>
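<ul>
<li>For the record, zero-padding those months in bulk only takes a few lines of Python outside of OpenRefine too (a rough sketch; the CSV filename and column name here are placeholders):</li>
</ul>
<pre tabindex="0"><code>import csv
import re

with open('iita-records.csv') as f:
    for row in csv.DictReader(f):
        date = row['dc.date.issued']
        # pad a single-digit month, e.g. '2013-5' becomes '2013-05'
        fixed = re.sub(r'^(\d{4})-(\d)$', r'\1-0\2', date)
        if fixed != date:
            print(date, '-&gt;', fixed)
</code></pre>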
<h2 id="2018-09-10">2018-09-10</h2>
<ul>
<li>Playing with <a href="https://github.com/eykhagen/strest">strest</a> to test the DSpace REST API programmatically</li>
<li>For example, given this <code>test.yaml</code>:</li>
</ul>
<pre tabindex="0"><code>version: 1
requests:
test:
method: GET
url: https://dspacetest.cgiar.org/rest/test
validate:
raw: &quot;REST api is running.&quot;
login:
url: https://dspacetest.cgiar.org/rest/login
method: POST
data:
json: {&quot;email&quot;:&quot;test@dspace&quot;,&quot;password&quot;:&quot;thepass&quot;}
status:
url: https://dspacetest.cgiar.org/rest/status
method: GET
headers:
rest-dspace-token: Value(login)
logout:
url: https://dspacetest.cgiar.org/rest/logout
method: POST
headers:
rest-dspace-token: Value(login)
# vim: set sw=2 ts=2:
</code></pre><ul>
<li>Works pretty well, though the DSpace <code>logout</code> always returns an HTTP 415 error for some reason</li>
<li>We could eventually use this to sanity check the API for creating collections, etc.</li>
<li>A user is getting an error in her workflow:</li>
</ul>
<pre tabindex="0"><code>2018-09-10 07:26:35,551 ERROR org.dspace.submit.step.CompleteStep @ Caught exception in submission step:
org.dspace.authorize.AuthorizeException: Authorization denied for action WORKFLOW_STEP_1 on COLLECTION:2 by user 3819
</code></pre><ul>
<li>Seems to be during the submit step, because it&rsquo;s workflow step 1&hellip;?</li>
<li>Move some top-level CRP communities to be below the new <a href="https://cgspace.cgiar.org/handle/10568/97114">CGIAR Research Programs and Platforms</a> community:</li>
</ul>
<pre tabindex="0"><code>$ dspace community-filiator --set -p 10568/97114 -c 10568/51670
$ dspace community-filiator --set -p 10568/97114 -c 10568/35409
$ dspace community-filiator --set -p 10568/97114 -c 10568/3112
</code></pre><ul>
<li>Valerio contacted me to point out some issues with metadata on CGSpace, which I corrected in PostgreSQL:</li>
</ul>
<pre tabindex="0"><code>update metadatavalue set text_value='ISI Journal' where resource_type_id=2 and metadata_field_id=226 and text_value='ISI Juornal';
UPDATE 1
update metadatavalue set text_value='ISI Journal' where resource_type_id=2 and metadata_field_id=226 and text_value='ISI journal';
UPDATE 23
update metadatavalue set text_value='ISI Journal' where resource_type_id=2 and metadata_field_id=226 and text_value='YES';
UPDATE 1
delete from metadatavalue where resource_type_id=2 and metadata_field_id=226 and text_value='NO';
DELETE 17
update metadatavalue set text_value='ISI Journal' where resource_type_id=2 and metadata_field_id=226 and text_value='ISI';
UPDATE 15
</code></pre><ul>
<li>Start working on adding metadata for access and usage rights that we started earlier in 2018 (and also in 2017)</li>
<li>The current <code>cg.identifier.status</code> field will become &ldquo;Access rights&rdquo; and <code>dc.rights</code> will become &ldquo;Usage rights&rdquo;</li>
<li>I have some work in progress on the <a href="https://github.com/alanorth/DSpace/tree/5_x-rights"><code>5_x-rights</code> branch</a></li>
<li>Linode said that CGSpace (linode18) had a high CPU load earlier today</li>
<li>When I looked, I see it&rsquo;s the same Russian IP that I noticed last month:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &quot;10/Sep/2018&quot; | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
1459 157.55.39.202
1579 95.108.181.88
1615 157.55.39.147
1714 66.249.64.91
1924 50.116.102.77
3696 157.55.39.106
3763 157.55.39.148
4470 70.32.83.92
4724 35.237.175.180
14132 5.9.6.51
</code></pre><ul>
<li>And this bot is still creating more Tomcat sessions than Nginx requests (WTF?):</li>
</ul>
<pre tabindex="0"><code># grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=5.9.6.51' dspace.log.2018-09-10
14133
</code></pre><ul>
<li>The user agent is still the same:</li>
</ul>
<pre tabindex="0"><code>Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)
</code></pre><ul>
<li>I added <code>.*crawl.*</code> to the Tomcat Session Crawler Manager Valve, so I&rsquo;m not sure why the bot is creating so many sessions&hellip;</li>
<li>I just tested that user agent on CGSpace and it <em>does not</em> create a new session:</li>
</ul>
<pre tabindex="0"><code>$ http --print Hh https://cgspace.cgiar.org 'User-Agent:Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)'
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: cgspace.cgiar.org
User-Agent: Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Language: en-US
Content-Type: text/html;charset=utf-8
Date: Mon, 10 Sep 2018 20:43:04 GMT
Server: nginx
Strict-Transport-Security: max-age=15768000
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Cocoon-Version: 2.2.0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
</code></pre><ul>
<li>I will have to keep an eye on it and perhaps add it to the list of &ldquo;bad bots&rdquo; that get rate limited</li>
</ul>
<h2 id="2018-09-12">2018-09-12</h2>
<ul>
<li>Merge AReS explorer changes to nginx config and deploy on CGSpace so CodeObia can start testing more</li>
<li>Re-create my local Docker container for PostgreSQL data, but using a volume for the database data:</li>
</ul>
<pre tabindex="0"><code>$ sudo docker volume create --name dspacetest_data
$ sudo docker run --name dspacedb -v dspacetest_data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:9.6-alpine
</code></pre><ul>
<li>Sisay is still having problems with the controlled vocabulary for top authors</li>
<li>I took a look at the submission template and Firefox complains that the XML file is missing a root element</li>
<li>I guess it&rsquo;s because Firefox is receiving an empty XML file</li>
<li>I told Sisay to run the XML file through tidy</li>
<li>More testing of the access and usage rights changes</li>
</ul>
<h2 id="2018-09-13">2018-09-13</h2>
<ul>
<li>Peter was communicating with Altmetric about the OAI mapping issue for item <a href="https://cgspace.cgiar.org/oai/request?verb=GetRecord&amp;metadataPrefix=oai_dc&amp;identifier=oai:cgspace.cgiar.org:10568/82810">10568/82810</a> again</li>
<li>Altmetric said it was somehow related to the OAI <code>dateStamp</code> not getting updated when the mappings changed, but I said that back in <a href="/cgspace-notes/2018-07/">2018-07</a> when this happened it was because the OAI was actually just not reflecting all the item&rsquo;s mappings</li>
<li>After forcing a complete re-indexing of OAI the mappings were fine</li>
<li>The <code>dateStamp</code> is most probably only updated when the item&rsquo;s metadata changes, not its mappings, so if Altmetric is relying on that we&rsquo;re in a tricky spot</li>
<li>We need to make sure that our OAI isn&rsquo;t publicizing stale data&hellip; I was going to post something on the dspace-tech mailing list, but never did</li>
<li>Linode says that CGSpace (linode18) has had high CPU for the past two hours</li>
<li>The top IP addresses today are:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep -E &quot;13/Sep/2018&quot; | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
32 46.229.161.131
38 104.198.9.108
39 66.249.64.91
56 157.55.39.224
57 207.46.13.49
58 40.77.167.120
78 169.255.105.46
702 54.214.112.202
1840 50.116.102.77
4469 70.32.83.92
</code></pre><ul>
<li>And the top two addresses seem to be re-using their Tomcat sessions properly:</li>
</ul>
<pre tabindex="0"><code>$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=70.32.83.92' dspace.log.2018-09-13 | sort | uniq
7
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=50.116.102.77' dspace.log.2018-09-13 | sort | uniq
2
</code></pre><ul>
<li>So I&rsquo;m not sure what&rsquo;s going on</li>
<li>Valerio asked me if there&rsquo;s a way to get the page views and downloads from CGSpace</li>
<li>I said no, but that we might be able to piggyback on the Atmire statlet REST API</li>
<li>For example, when you expand the &ldquo;statlet&rdquo; at the bottom of an item like <a href="https://cgspace.cgiar.org/handle/10568/97103">10568/97103</a> you can see the following request in the browser console:</li>
</ul>
<pre tabindex="0"><code>https://cgspace.cgiar.org/rest/statlets?handle=10568/97103
</code></pre><ul>
<li>That JSON file has the total page views and item downloads for the item&hellip;</li>
<li>Abenet forwarded a request by CIP that item thumbnails be included in RSS feeds</li>
<li>I had a quick look at the DSpace 5.x manual and it doesn&rsquo;t seem that this is possible (you can only add metadata)</li>
<li>Testing the new LDAP server that CGNET says will be replacing the old one, it doesn&rsquo;t seem that they are using the global catalog on port 3269 anymore; now only 636 is open</li>
<li>I did a clean deploy of DSpace 5.8 on Ubuntu 18.04 with some stripped down Tomcat 8 configuration and actually managed to get it up and running without the autowire errors that I had previously experienced</li>
<li>I realized that it always works on my local machine with Tomcat 8.5.x, but not when I do the deployment from Ansible in Ubuntu 18.04</li>
<li>So there must be something in my Tomcat 8 <code>server.xml</code> template</li>
<li>Now I re-deployed it with the normal server template and it&rsquo;s working, WTF?</li>
<li>Must have been something like an old DSpace 5.5 file in the spring folder&hellip; weird</li>
<li>But yay, this means we can update DSpace Test to Ubuntu 18.04, Tomcat 8, PostgreSQL 9.6, etc&hellip;</li>
</ul>
<h2 id="2018-09-14">2018-09-14</h2>
<ul>
<li>Sisay uploaded the IITA records to CGSpace, but forgot to remove the old Handles</li>
<li>I explicitly told him not to forget to remove them yesterday!</li>
</ul>
<h2 id="2018-09-16">2018-09-16</h2>
<ul>
<li>Add the DSpace build.properties as a template into my <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> for configuring DSpace machines</li>
<li>One stupid thing there is that I add all the variables in a private vars file, which is apparently higher precedence than host vars, meaning that I can&rsquo;t override them (like the SMTP server) on a per-host basis</li>
<li>Discuss access and usage rights with Peter</li>
<li>I suggested that we leave access rights (<code>cg.identifier.access</code>) as it is now, with &ldquo;Open Access&rdquo; or &ldquo;Limited Access&rdquo;, and then simply re-brand that as &ldquo;Access rights&rdquo; in the UIs and relevant drop downs</li>
<li>Then we continue as planned to add <code>dc.rights</code> as &ldquo;Usage rights&rdquo;</li>
</ul>
<h2 id="2018-09-17">2018-09-17</h2>
<ul>
<li>Skype meeting with CGSpace team in Addis</li>
<li>Change <code>cg.identifier.status</code> &ldquo;Access rights&rdquo; options to:
<ul>
<li>Open Access→Unrestricted Access</li>
<li>Limited Access→Restricted Access</li>
<li>Metadata Only</li>
</ul>
</li>
<li>Update these immediately, but talk to CodeObia to create a mapping between the old and new values</li>
<li>Finalize <code>dc.rights</code> &ldquo;Usage rights&rdquo; with seven combinations of Creative Commons, plus the others</li>
<li>Need to double check the new <a href="https://cgspace.cgiar.org/handle/10568/97114">CRP community</a> to see why the collection counts aren&rsquo;t updated after we moved the communities there last week
<ul>
<li>I forced a full Discovery re-index and now the community shows 1,600 items</li>
</ul>
</li>
<li>Check if it&rsquo;s possible to have items deposited via REST use a workflow so we can perhaps tell ICARDA to use that from MEL</li>
<li>Agree that we&rsquo;ll publicize the AReS explorer in the week before the Big Data Platform workshop
<ul>
<li>Put a link and/or picture on the CGSpace homepage saying &ldquo;Visualized CGSpace research&rdquo; or something, and post a message on Yammer</li>
</ul>
</li>
<li>I want to explore creating a thin API to make the item view and download stats available from Solr so CodeObia can use them in the AReS explorer</li>
<li>Currently CodeObia is exploring using the Atmire statlets internal API, but I don&rsquo;t really like that&hellip;</li>
<li>There are some example queries on the <a href="https://wiki.lyrasis.org/display/DSPACE/Solr">DSpace Solr wiki</a></li>
<li>For example, this query returns 1655 rows for item <a href="https://cgspace.cgiar.org/handle/10568/10630">10568/10630</a>:</li>
</ul>
<pre tabindex="0"><code>$ http 'http://localhost:3000/solr/statistics/select?indent=on&amp;rows=0&amp;q=type:0+owningItem:11576&amp;fq=isBot:false'
</code></pre><ul>
<li>The id in the Solr query is the item&rsquo;s database id (get it from the REST API or something)</li>
<li>Next, I adapted a query to get the downloads and it shows 889, which is similar to the number Atmire&rsquo;s statlet shows, though the query logic here is confusing:</li>
</ul>
<pre tabindex="0"><code>$ http 'http://localhost:3000/solr/statistics/select?indent=on&amp;rows=0&amp;q=type:0+owningItem:11576&amp;fq=isBot:false&amp;fq=-(bundleName:[*+TO+*]-bundleName:ORIGINAL)&amp;fq=-(statistics_type:[*+TO+*]+-statistics_type:view)'
</code></pre><ul>
<li>According to the <a href="https://wiki.apache.org/solr/SolrQuerySyntax">SolrQuerySyntax</a> page on the Apache wiki, the <code>[* TO *]</code> syntax just selects a range (in this case all values for a field)</li>
<li>So it seems to be:
<ul>
<li><code>type:0</code> is for bitstreams according to the DSpace Solr documentation</li>
<li><code>-(bundleName:[*+TO+*]-bundleName:ORIGINAL)</code> seems to be a <a href="https://wiki.apache.org/solr/NegativeQueryProblems">negative query starting with all documents</a>, subtracting those with <code>bundleName:ORIGINAL</code>, and then negating the whole thing&hellip; meaning only documents from <code>bundleName:ORIGINAL</code>?</li>
</ul>
</li>
<li>What the shit, I think I&rsquo;m right: the simplified logic in <em>this</em> query returns the same 889:</li>
</ul>
<pre tabindex="0"><code>$ http 'http://localhost:3000/solr/statistics/select?indent=on&amp;rows=0&amp;q=type:0+owningItem:11576&amp;fq=isBot:false&amp;fq=bundleName:ORIGINAL&amp;fq=-(statistics_type:[*+TO+*]+-statistics_type:view)'
</code></pre><ul>
<li>And if I simplify the <code>statistics_type</code> logic the same way, it still returns the same 889!</li>
</ul>
<pre tabindex="0"><code>$ http 'http://localhost:3000/solr/statistics/select?indent=on&amp;rows=0&amp;q=type:0+owningItem:11576&amp;fq=isBot:false&amp;fq=bundleName:ORIGINAL&amp;fq=statistics_type:view'
</code></pre><ul>
<li>As for item views, I suppose that&rsquo;s just the same query, minus the <code>bundleName:ORIGINAL</code>:</li>
</ul>
<pre tabindex="0"><code>$ http 'http://localhost:3000/solr/statistics/select?indent=on&amp;rows=0&amp;q=type:0+owningItem:11576&amp;fq=isBot:false&amp;fq=-bundleName:ORIGINAL&amp;fq=statistics_type:view'
</code></pre><ul>
<li>That one returns 766, which is exactly 1655 minus 889&hellip;</li>
<li>Also, Solr&rsquo;s <code>fq</code> is similar to the regular <code>q</code> query parameter, but it is considered for the Solr query cache so it should be faster for multiple queries (the two view and download queries are combined in a small sketch below)</li>
</ul>
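<ul>
<li>Putting those two simplified queries together, a rough Python sketch to fetch both numbers for one item (this assumes the same local Solr URL used above, and is just an illustration, not the real API):</li>
</ul>
<pre tabindex="0"><code>import requests

solr_url = 'http://localhost:3000/solr/statistics/select'

def item_stats(item_id):
    # common parameters from the queries above; wt=json asks Solr for JSON
    base = {'q': 'type:0 owningItem:' + str(item_id), 'rows': 0, 'wt': 'json'}
    views = requests.get(solr_url, params={**base, 'fq': ['isBot:false', '-bundleName:ORIGINAL', 'statistics_type:view']}).json()
    downloads = requests.get(solr_url, params={**base, 'fq': ['isBot:false', 'bundleName:ORIGINAL', 'statistics_type:view']}).json()
    return views['response']['numFound'], downloads['response']['numFound']

print(item_stats(11576))
</code></pre>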
<h2 id="2018-09-18">2018-09-18</h2>
<ul>
<li>I managed to create a simple proof of concept REST API to expose item view and download statistics: <a href="https://github.com/alanorth/cgspace-statistics-api">cgspace-statistics-api</a></li>
<li>It uses the Python-based <a href="https://falcon.readthedocs.io">Falcon</a> web framework and talks to Solr directly using the <a href="https://github.com/moonlitesolutions/SolrClient">SolrClient</a> library (which seems to have issues in Python 3.7 currently)</li>
<li>After deploying on DSpace Test I can then get the stats for an item using its ID:</li>
</ul>
<pre tabindex="0"><code>$ http -b 'https://dspacetest.cgiar.org/rest/statistics/item?id=110988'
{
&quot;downloads&quot;: 2,
&quot;id&quot;: 110988,
&quot;views&quot;: 15
}
</code></pre><ul>
<li>The numbers are different than those that come from Atmire&rsquo;s statlets for some reason, but as I&rsquo;m querying Solr directly, I have no idea where their numbers come from!</li>
<li>Moayad from CodeObia asked if I could make the API be able to paginate over all items, for example: /statistics?limit=100&amp;page=1</li>
<li>Getting all the item IDs from PostgreSQL is certainly easy:</li>
</ul>
<pre tabindex="0"><code>dspace=# select item_id from item where in_archive is True and withdrawn is False and discoverable is True;
</code></pre><ul>
<li>The rest of the Falcon tooling will be more difficult&hellip; (a rough sketch of a paginated route is below)</li>
</ul>
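<ul>
<li>A rough sketch of what such a paginated Falcon route could look like, using that PostgreSQL query with <code>LIMIT</code> and <code>OFFSET</code> (the connection string is a placeholder and this is not the final implementation):</li>
</ul>
<pre tabindex="0"><code>import falcon
import psycopg2

class AllItemsResource:
    def on_get(self, req, resp):
        limit = req.get_param_as_int('limit') or 100
        page = req.get_param_as_int('page') or 0
        connection = psycopg2.connect('dbname=dspace user=dspace host=localhost')
        cursor = connection.cursor()
        # page through the archived, non-withdrawn, discoverable items
        cursor.execute('SELECT item_id FROM item WHERE in_archive IS True AND withdrawn IS False AND discoverable IS True ORDER BY item_id LIMIT %s OFFSET %s', (limit, page * limit))
        resp.media = {'limit': limit, 'page': page, 'items': [row[0] for row in cursor.fetchall()]}
        cursor.close()
        connection.close()

api = falcon.API()
api.add_route('/statistics', AllItemsResource())
</code></pre>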
<h2 id="2018-09-19">2018-09-19</h2>
<ul>
<li>I emailed Jane Poole to ask if there is some money we can use from the Big Data Platform (BDP) to fund the purchase of some Atmire credits for CGSpace</li>
<li>I learned that there is an efficient way to do <a href="http://yonik.com/solr/paging-and-deep-paging/">&ldquo;deep paging&rdquo; in large Solr result sets by using <code>cursorMark</code></a>, but it doesn&rsquo;t work with faceting (the basic paging loop is sketched below)</li>
</ul>
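<ul>
<li>The gist of <code>cursorMark</code> paging is a loop like this sketch (Solr wants a sort on the core&rsquo;s unique key and no <code>start</code> parameter; the URL, query, and <code>id</code> sort field here are assumptions for illustration):</li>
</ul>
<pre tabindex="0"><code>import requests

solr_url = 'http://localhost:3000/solr/statistics/select'
params = {'q': '*:*', 'rows': 1000, 'sort': 'id asc', 'wt': 'json', 'cursorMark': '*'}
while True:
    response = requests.get(solr_url, params=params).json()
    for doc in response['response']['docs']:
        pass  # process each document here
    # Solr signals the last page by returning the same cursor again
    if response['nextCursorMark'] == params['cursorMark']:
        break
    params['cursorMark'] = response['nextCursorMark']
</code></pre>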
<h2 id="2018-09-20">2018-09-20</h2>
<ul>
<li>Contact Atmire to ask how we can buy more credits for future development (<a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=644">#644</a>)</li>
<li>I researched the Solr <code>filterCache</code> size and I found out that the formula for calculating the potential memory use of the cache (<strong>each entry</strong> costs roughly <code>maxDoc/8 + 128</code> bytes) is:</li>
</ul>
<pre tabindex="0"><code>((maxDoc/8) + 128) * (size_defined_in_solrconfig.xml)
</code></pre><ul>
<li>Which means that, for our statistics core with <em>149 million</em> documents, a full <code>filterCache</code> of 512 entries would use 8.9 GB!</li>
</ul>
<pre tabindex="0"><code>((149374568/8) + 128) * 512 = 9560037888 bytes (8.9 GB)
</code></pre><ul>
<li>So I think we can forget about tuning this for now!</li>
<li><a href="http://lucene.472066.n3.nabble.com/Calculating-filterCache-size-td4142526.html">Discussion on the mailing list about <code>filterCache</code> size</a></li>
<li><a href="https://docs.google.com/document/d/1vl-nmlprSULvNZKQNrqp65eLnLhG9s_ydXQtg9iML10/edit">Article discussing testing methodology for different <code>filterCache</code> sizes</a></li>
<li>Discuss Handle links on Twitter with IWMI</li>
</ul>
<h2 id="2018-09-21">2018-09-21</h2>
<ul>
<li>I see that there was a nice optimization to the ImageMagick PDF CMYK detection in the upstream <code>dspace-5_x</code> branch: <a href="https://github.com/DSpace/DSpace/pull/2204">DS-3664</a></li>
<li>The fix will go into DSpace 5.10, and we are currently on DSpace 5.8 but I think I&rsquo;ll cherry-pick that fix into our <code>5_x-prod</code> branch:
<ul>
<li>4e8c7b578bdbe26ead07e36055de6896bbf02f83: ImageMagick: Only execute &ldquo;identify&rdquo; on first page</li>
</ul>
</li>
<li>I think it would also be nice to cherry-pick the fixes for <a href="https://github.com/DSpace/DSpace/pull/2020">DS-3883</a>, which is related to optimizing the XMLUI item display of items with many bitstreams
<ul>
<li>a0ea20bd1821720b111e2873b08e03ce2bf93307: DS-3883: Don&rsquo;t loop through original bitstreams if only displaying thumbnails</li>
<li>8d81e825dee62c2aa9d403a505e4a4d798964e8d: DS-3883: If only including thumbnails, only load the main item thumbnail.</li>
</ul>
</li>
</ul>
<h2 id="2018-09-23">2018-09-23</h2>
<ul>
<li>I did more work on my <a href="https://github.com/alanorth/cgspace-statistics-api">cgspace-statistics-api</a>, fixing some item view counts and adding indexing via SQLite (I&rsquo;m trying to avoid having to set up <em>yet another</em> database, user, password, etc.) during deployment</li>
<li>I created a new branch called <code>5_x-upstream-cherry-picks</code> to test and track those cherry-picks from the upstream 5.x branch</li>
<li>Also, I need to test the new LDAP server, so I will deploy that on DSpace Test today</li>
<li>Rename my cgspace-statistics-api to <a href="https://github.com/alanorth/dspace-statistics-api">dspace-statistics-api</a> on GitHub</li>
</ul>
<h2 id="2018-09-24">2018-09-24</h2>
<ul>
<li>Trying to figure out how to get item views and downloads from SQLite in a join</li>
<li>It appears SQLite doesn&rsquo;t support <code>FULL OUTER JOIN</code>, so some people on StackOverflow have emulated it with <code>LEFT JOIN</code> and <code>UNION</code>:</li>
</ul>
<pre tabindex="0"><code>&gt; SELECT views.views, views.id, downloads.downloads, downloads.id FROM itemviews views
LEFT JOIN itemdownloads downloads USING(id)
UNION ALL
SELECT views.views, views.id, downloads.downloads, downloads.id FROM itemdownloads downloads
LEFT JOIN itemviews views USING(id)
WHERE views.id IS NULL;
</code></pre><ul>
<li>This &ldquo;works&rdquo; but the resulting rows are kinda messy so I&rsquo;d have to do extra logic in Python</li>
<li>Maybe we can use one &ldquo;items&rdquo; table with default values and UPSERT (aka insert&hellip; on conflict&hellip; do update):</li>
</ul>
<pre tabindex="0"><code>sqlite&gt; CREATE TABLE items(id INT PRIMARY KEY, views INT DEFAULT 0, downloads INT DEFAULT 0);
sqlite&gt; INSERT INTO items(id, views) VALUES(0, 52);
sqlite&gt; INSERT INTO items(id, downloads) VALUES(1, 171);
sqlite&gt; INSERT INTO items(id, downloads) VALUES(1, 176) ON CONFLICT(id) DO UPDATE SET downloads=176;
sqlite&gt; INSERT INTO items(id, views) VALUES(0, 78) ON CONFLICT(id) DO UPDATE SET views=78;
sqlite&gt; INSERT INTO items(id, views) VALUES(0, 3) ON CONFLICT(id) DO UPDATE SET downloads=3;
sqlite&gt; INSERT INTO items(id, views) VALUES(0, 7) ON CONFLICT(id) DO UPDATE SET downloads=excluded.views;
</code></pre><ul>
<li>This totally works!</li>
<li>Note the special <code>excluded.views</code> form! See <a href="https://www.sqlite.org/lang_UPSERT.html">SQLite&rsquo;s lang_UPSERT documentation</a></li>
<li>Oh nice, I finally finished the Falcon API route to page through all the results using SQLite&rsquo;s amazing <code>LIMIT</code> and <code>OFFSET</code> support</li>
<li>But when I deployed it on my Ubuntu 16.04 environment I realized Ubuntu&rsquo;s SQLite is old and doesn&rsquo;t support <code>UPSERT</code>, so my indexing doesn&rsquo;t work&hellip;</li>
<li>Apparently <code>UPSERT</code> came in SQLite 3.24.0 (2018-06-04), and Ubuntu 16.04 has 3.11.0</li>
<li>Ok this is hilarious, I manually downloaded the <a href="https://packages.ubuntu.com/cosmic/libsqlite3-0">libsqlite3 3.24.0 deb from Ubuntu 18.10 &ldquo;cosmic&rdquo;</a> and installed it in Ubuntu 16.04 and now the Python <code>indexer.py</code> works</li>
<li>This is definitely a dirty hack, but the list of packages we use that depend on <code>libsqlite3-0</code> in Ubuntu 16.04 is actually pretty small:</li>
</ul>
<pre tabindex="0"><code># apt-cache rdepends --installed libsqlite3-0 | sort | uniq
gnupg2
libkrb5-26-heimdal
libnss3
libpython2.7-stdlib
libpython3.5-stdlib
</code></pre><ul>
<li>I wonder if I could work around this by detecting the SQLite library version, for example on Ubuntu 16.04 after I replaced the library:</li>
</ul>
<pre tabindex="0"><code># python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information.
&gt;&gt;&gt; import sqlite3
&gt;&gt;&gt; print(sqlite3.sqlite_version)
3.24.0
</code></pre><ul>
<li>Or maybe I should just bite the bullet and migrate this to PostgreSQL, as it <a href="https://wiki.postgresql.org/wiki/UPSERT">supports <code>UPSERT</code> since version 9.5</a> and also seems to have my new favorite <code>LIMIT</code> and <code>OFFSET</code></li>
<li>I changed the syntax of the SQLite stuff and PostgreSQL is working flawlessly with psycopg2&hellip; hmmm.</li>
<li>For reference, creating a PostgreSQL database for testing this locally (though <code>indexer.py</code> will create the table):</li>
</ul>
<pre tabindex="0"><code>$ createdb -h localhost -U postgres -O dspacestatistics --encoding=UNICODE dspacestatistics
$ createuser -h localhost -U postgres --pwprompt dspacestatistics
$ psql -h localhost -U postgres dspacestatistics
dspacestatistics=&gt; CREATE TABLE IF NOT EXISTS items
dspacestatistics-&gt; (id INT PRIMARY KEY, views INT DEFAULT 0, downloads INT DEFAULT 0)
</code></pre><h2 id="2018-09-25">2018-09-25</h2>
<ul>
<li>I deployed the DSpace statistics API on CGSpace, but when I ran the indexer it wanted to index 180,000 pages of item views</li>
<li>I&rsquo;m not even sure how that&rsquo;s possible, as we only have 74,000 items!</li>
<li>I need to inspect the <code>id</code> values that are returned for views and cross check them with the <code>owningItem</code> values for bitstream downloads&hellip;</li>
<li>Also, I could try to check all IDs against the items table to see if they are actually items (perhaps the Solr <code>id</code> field doesn&rsquo;t correspond with <em>actual</em> DSpace items?)</li>
<li>I want to purge the bot hits from the Solr statistics core, as I am now realizing that I don&rsquo;t give a shit about tens of millions of hits by Google and Bing indexing my shit every day (at least not in Solr!)</li>
<li>CGSpace&rsquo;s Solr core has 150,000,000 documents in it&hellip; and it&rsquo;s still pretty fast to query, but it&rsquo;s really a maintenance and backup burden</li>
<li>DSpace Test currently has about 2,000,000 documents with <code>isBot:true</code> in its Solr statistics core, and the size on disk is 2GB (it&rsquo;s not much, but I have to test this somewhere!)</li>
<li>According to the <a href="https://wiki.lyrasis.org/display/DSDOC5x/SOLR+Statistics+Maintenance">DSpace 5.x Solr documentation</a> I can use <code>dspace stats-util -f</code>, so let&rsquo;s try it:</li>
</ul>
<pre tabindex="0"><code>$ dspace stats-util -f
</code></pre><ul>
<li>The command comes back after a few seconds and I still see 2,000,000 documents in the statistics core with <code>isBot:true</code></li>
<li>I was just writing a message to the dspace-tech mailing list and then I decided to check the number of bot view events on DSpace Test again, and now it&rsquo;s 201 instead of 2,000,000, and the statistics core is only 30MB now!</li>
<li>I will set the <code>logBots = false</code> property in <code>dspace/config/modules/usage-statistics.cfg</code> on DSpace Test and check if the number of <code>isBot:true</code> events goes up any more&hellip;</li>
<li>I restarted the server with <code>logBots = false</code> and after it came back up I see 266 events with <code>isBot:true</code> (maybe they were buffered)&hellip; I will check again tomorrow</li>
<li>After a few hours I see there are still only 266 view events with <code>isBot:true</code> on DSpace Test&rsquo;s Solr statistics core, so I&rsquo;m definitely going to deploy this on CGSpace soon</li>
<li>Also, CGSpace currently has 60,089,394 view events with <code>isBot:true</code> in its Solr statistics core and it is 124GB!</li>
<li>Amazing! After running <code>dspace stats-util -f</code> on CGSpace the Solr statistics core went from 124GB to 60GB, and now there are only 700 events with <code>isBot:true</code> so I should really disable logging of bot events!</li>
<li>I&rsquo;m super curious to see how the JVM heap usage changes&hellip;</li>
<li>I made (and merged) a pull request to disable bot logging on the <code>5_x-prod</code> branch (<a href="https://github.com/ilri/DSpace/pull/387">#387</a>)</li>
<li>Now I&rsquo;m wondering if there are other bot requests that aren&rsquo;t classified as bots because the IP lists or user agents are outdated</li>
<li>DSpace ships a list of spider IPs, for example: <code>config/spiders/iplists.com-google.txt</code></li>
<li>I checked the list against all the IPs we&rsquo;ve seen using the &ldquo;Googlebot&rdquo; user agent in CGSpace&rsquo;s nginx access logs</li>
<li>The first thing I learned is that shit tons of IPs in Russia, Ukraine, Ireland, Brazil, Portugal, the US, Canada, etc. are pretending to be &ldquo;Googlebot&rdquo;&hellip;</li>
<li>According to the <a href="https://support.google.com/webmasters/answer/80553">Googlebot FAQ</a> the domain name in the reverse DNS lookup should contain either <code>googlebot.com</code> or <code>google.com</code></li>
<li>In Solr this appears to be an appropriate query that I can maybe use later (returns 81,000 documents):</li>
</ul>
<pre tabindex="0"><code>*:* AND (dns:*googlebot.com. OR dns:*google.com.) AND isBot:false
</code></pre><ul>
<li>I translate that into a delete command using the <code>/update</code> handler:</li>
</ul>
<pre tabindex="0"><code>http://localhost:8081/solr/statistics/update?commit=true&amp;stream.body=&lt;delete&gt;&lt;query&gt;*:*+AND+(dns:*googlebot.com.+OR+dns:*google.com.)+AND+isBot:false&lt;/query&gt;&lt;/delete&gt;
</code></pre><ul>
<li>And magically all those 81,000 documents are gone!</li>
<li>After a few hours the Solr statistics core is down to 44GB on CGSpace!</li>
<li>I did a <em>major</em> refactor and logic fix in the DSpace Statistics API&rsquo;s <code>indexer.py</code></li>
<li>Basically, it turns out that using <code>facet.mincount=1</code> is really beneficial for me because it reduces the size of the Solr result set, reduces the amount of data we need to ingest into PostgreSQL, and the API returns HTTP 404 Not Found for items without views or downloads anyways (an example of this kind of faceted query is at the end of this section)</li>
<li>I deployed the new version on CGSpace and now it looks pretty good!</li>
</ul>
<pre tabindex="0"><code>Indexing item views (page 28 of 753)
...
Indexing item downloads (page 260 of 260)
</code></pre><ul>
<li>And now it&rsquo;s fast as hell due to the muuuuch smaller Solr statistics core</li>
</ul>
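<ul>
<li>For reference, the kind of faceted query I mean, with <code>facet.mincount=1</code> so items with zero views never come back at all (faceting on the <code>id</code> field and the paging values here are assumptions for illustration):</li>
</ul>
<pre tabindex="0"><code>import requests

params = {
    'q': '*:*',
    'fq': ['isBot:false', 'statistics_type:view'],
    'facet': 'true',
    'facet.field': 'id',
    'facet.mincount': 1,
    'facet.limit': 100,
    'facet.offset': 0,
    'rows': 0,
    'wt': 'json',
}
response = requests.get('http://localhost:8081/solr/statistics/select', params=params).json()
# Solr returns facet counts as a flat [value, count, value, count, ...] list
counts = response['facet_counts']['facet_fields']['id']
views = dict(zip(counts[0::2], counts[1::2]))
</code></pre>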
<h2 id="2018-09-26">2018-09-26</h2>
<ul>
<li>Linode emailed to say that CGSpace (linode18) was using 30Mb/sec of outward bandwidth for two hours around midnight</li>
<li>I don&rsquo;t see anything unusual in the nginx logs, so perhaps it was the cron job that syncs the Solr database to Amazon S3?</li>
<li>It could be that the bot purge yesterday changed the core significantly so there was a lot to change?</li>
<li>I don&rsquo;t see any drop in JVM heap size in CGSpace&rsquo;s munin stats since I did the Solr cleanup, but this looks pretty good:</li>
</ul>
<p><img src="/cgspace-notes/2018/09/tomcat_maxtime-week.png" alt="Tomcat max processing time week"></p>
<ul>
<li>I will have to keep an eye on that over the next few weeks to see if things stay as they are</li>
<li>I did a batch replacement of the access rights with my <a href="https://gist.github.com/alanorth/df92cbfb54d762ba21b28f7cd83b6897">fix-metadata-values.py</a> script on DSpace Test:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i /tmp/fix-access-status.csv -db dspace -u dspace -p 'fuuu' -f cg.identifier.status -t correct -m 206
</code></pre><ul>
<li>This changes &ldquo;Open Access&rdquo; to &ldquo;Unrestricted Access&rdquo; and &ldquo;Limited Access&rdquo; to &ldquo;Restricted Access&rdquo;</li>
<li>After that I did a full Discovery reindex:</li>
</ul>
<pre tabindex="0"><code>$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b
real 77m3.755s
user 7m39.785s
sys 2m18.485s
</code></pre><ul>
<li>I told Peter it&rsquo;s better to do the access rights before the usage rights because the git branches are conflicting with each other and it&rsquo;s actually a pain in the ass to keep changing the values as we discuss, rebase, merge, fix conflicts&hellip;</li>
<li>Udana and Mia from WLE were asking some questions about their <a href="https://feeds.feedburner.com/WLEcgspace">WLE Feedburner feed</a></li>
<li>It&rsquo;s pretty confusing, because until recently they were entering issue dates as only YYYY (like 2018) and their feeds were all showing items in the wrong order</li>
<li>I&rsquo;m not exactly sure what their problem is now, though</li>
<li>I updated the dspace-statistics-api to use psycopg2&rsquo;s <code>execute_values()</code> to insert batches of 100 values into PostgreSQL instead of doing every insert individually (a short sketch of this pattern is at the end of this section)</li>
<li>On CGSpace this reduces the total run time of <code>indexer.py</code> from 432 seconds to 400 seconds (most of the time is actually spent in getting the data from Solr though)</li>
</ul>
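<ul>
<li>For reference, the <code>execute_values()</code> pattern looks roughly like this sketch, reusing the items table and UPSERT from above (<code>page_size</code> controls the batch size):</li>
</ul>
<pre tabindex="0"><code>import psycopg2
import psycopg2.extras

connection = psycopg2.connect('dbname=dspacestatistics user=dspacestatistics host=localhost')
cursor = connection.cursor()
data = [(0, 52), (1, 171)]  # (id, views) tuples gathered from Solr
psycopg2.extras.execute_values(
    cursor,
    'INSERT INTO items(id, views) VALUES %s ON CONFLICT(id) DO UPDATE SET views=excluded.views',
    data,
    page_size=100)
connection.commit()
cursor.close()
connection.close()
</code></pre>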
<h2 id="2018-09-27">2018-09-27</h2>
<ul>
<li>Linode emailed to say that CGSpace&rsquo;s (linode18) CPU load was high for a few hours last night</li>
<li>Looking in the nginx logs around that time I see some new IPs that look like they are harvesting things:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E &quot;26/Sep/2018:(19|20|21)&quot; | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
295 34.218.226.147
296 66.249.64.95
350 157.55.39.185
359 207.46.13.28
371 157.55.39.85
388 40.77.167.148
444 66.249.64.93
544 68.6.87.12
834 66.249.64.91
902 35.237.175.180
</code></pre><ul>
<li><code>35.237.175.180</code> is on Google Cloud</li>
<li><code>68.6.87.12</code> is on Cox Communications in the US (?)</li>
<li>These hosts are not using proper user agents and are not re-using their Tomcat sessions:</li>
</ul>
<pre tabindex="0"><code>$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=35.237.175.180' dspace.log.2018-09-26 | sort | uniq
5423
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=68.6.87.12' dspace.log.2018-09-26 | sort | uniq
758
</code></pre><ul>
<li>I will add their IPs to the list of bad bots in nginx so we can add a &ldquo;bot&rdquo; user agent to them and let Tomcat&rsquo;s Crawler Session Manager Valve handle them</li>
<li>I asked Atmire to prepare an invoice for 125 credits</li>
</ul>
<h2 id="2018-09-29">2018-09-29</h2>
<ul>
<li>I merged some changes to author affiliations from Sisay as well as some corrections to organizational names using smart quotes like <code>Université d&rsquo;Abomey Calavi</code> (<a href="https://github.com/ilri/DSpace/pull/388">#388</a>)</li>
<li>Peter sent me a list of 43 author names to fix, but it had some encoding errors like <code>Belalcázar, John</code> like usual (I will tell him to stop trying to export as UTF-8 because it never seems to work)</li>
<li>I did batch replaces for both on CGSpace with my <a href="https://gist.github.com/alanorth/df92cbfb54d762ba21b28f7cd83b6897">fix-metadata-values.py</a> script:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i 2018-09-29-fix-affiliations.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.affiliation -t correct -m 211
$ ./fix-metadata-values.py -i 2018-09-29-fix-authors.csv -db dspace -u dspace -p 'fuuu' -f dc.contributor.author -t correct -m 3
</code></pre><ul>
<li>Afterwards I started a full Discovery re-index:</li>
</ul>
<pre tabindex="0"><code>$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b
</code></pre><ul>
<li>Linode sent an alert that both CGSpace and DSpace Test were using high CPU for the last two hours</li>
<li>It seems to be Moayad trying to do the AReS explorer indexing</li>
<li>He was sending too many (5 or 10) concurrent requests to the server, but still&hellip; why is this shit so slow?!</li>
</ul>
<h2 id="2018-09-30">2018-09-30</h2>
<ul>
<li>Valerio keeps sending items on CGSpace that have weird or incorrect languages, authors, etc</li>
<li>I think I should just batch export and update all languages&hellip;</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/2018-09-30-languages.csv with csv;
</code></pre><ul>
<li>Then I can simply delete the &ldquo;Other&rdquo; and &ldquo;other&rdquo; ones because that&rsquo;s not useful at all:</li>
</ul>
<pre tabindex="0"><code>dspace=# DELETE FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='Other';
DELETE 6
dspace=# DELETE FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='other';
DELETE 79
</code></pre><ul>
<li>Looking through the list I see some weird language codes like <code>gh</code>, so I checked out those items:</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT resource_id FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='gh';
resource_id
-------------
94530
94529
dspace=# SELECT handle,item_id FROM item, handle WHERE handle.resource_type_id=2 AND handle.resource_id = item.item_id AND handle.resource_id in (94530, 94529);
handle | item_id
-------------+---------
10568/91386 | 94529
10568/91387 | 94530
</code></pre><ul>
<li>Those items are from Ghana, so the submitter apparently thought <code>gh</code> was a language&hellip; I can safely delete them:</li>
</ul>
<pre tabindex="0"><code>dspace=# DELETE FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='gh';
DELETE 2
</code></pre><ul>
<li>The next issue would be <code>jn</code>:</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT resource_id FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='jn';
resource_id
-------------
94001
94003
dspace=# SELECT handle,item_id FROM item, handle WHERE handle.resource_type_id=2 AND handle.resource_id = item.item_id AND handle.resource_id in (94001, 94003);
handle | item_id
-------------+---------
10568/90868 | 94001
10568/90870 | 94003
</code></pre><ul>
<li>Those items are about Japan, so I will update them to be <code>ja</code></li>
<li>Other replacements:</li>
</ul>
<pre tabindex="0"><code>DELETE FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='gh';
UPDATE metadatavalue SET text_value='fr' WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='fn';
UPDATE metadatavalue SET text_value='hi' WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='in';
UPDATE metadatavalue SET text_value='ja' WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='Ja';
UPDATE metadatavalue SET text_value='ja' WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='jn';
UPDATE metadatavalue SET text_value='ja' WHERE resource_type_id=2 AND metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'language' and qualifier = 'iso') AND text_value='jp';
</code></pre><ul>
<li>Then there are 12 items with <code>en|hi</code>, but they were all in one collection so I just exported it as a CSV and then re-imported the corrected metadata</li>
</ul>
<!-- raw HTML omitted -->
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2021-12/">December, 2021</a></li>
<li><a href="/cgspace-notes/2021-11/">November, 2021</a></li>
<li><a href="/cgspace-notes/2021-10/">October, 2021</a></li>
<li><a href="/cgspace-notes/2021-09/">September, 2021</a></li>
<li><a href="/cgspace-notes/2021-08/">August, 2021</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>