2018-06-04
- Test the DSpace 5.8 module upgrades from Atmire (#378)
- There seems to be a problem with the CUA and L&R versions in pom.xml because they are using SNAPSHOT and it doesn’t build
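- To reproduce the failure I just run the normal DSpace Maven build; a minimal sketch, assuming the standard [dspace-source] layout (the exact profiles used on CGSpace are omitted), where -U forces Maven to re-check remote SNAPSHOT versions:
$ cd [dspace-source]
$ mvn -U clean package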
- I added the new CCAFS Phase II Project Tag PII-FP1_PACCA2 and merged it into the 5_x-prod branch (#379)
- I proofed and tested the ILRI author corrections that Peter sent back to me this week:
$ ./fix-metadata-values.py -i /tmp/2018-05-30-Correct-660-authors.csv -db dspace -u dspace -p 'fuuu' -f dc.contributor.author -t correct -m 3 -n
- I think a sane proofing workflow in OpenRefine is to apply the custom text facets for check/delete/remove and illegal characters that I developed in March 2018
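- As a quick shell-side complement to those facets (not a replacement for them), something like this can flag stray control characters in a corrections file; the character ranges here are an assumption about what counts as “illegal”:
$ grep -cP '[\x00-\x08\x0B\x0C\x0E-\x1F\x7F]' /tmp/2018-05-30-Correct-660-authors.csv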
- Time to index ~70,000 items on CGSpace:
$ time schedtool -D -e ionice -c2 -n7 nice -n19 [dspace]/bin/dspace index-discovery -b
real 74m42.646s
user 8m5.056s
sys 2m7.289s
Read more →
2018-05-01
- I cleared the Solr statistics core on DSpace Test by issuing two commands directly to the Solr admin interface:
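- The exact commands aren’t reproduced in this summary, but the general shape is a delete-by-query POSTed to the statistics core’s update handler; a sketch, where the host, port, and query are assumptions (this particular query would wipe the entire core):
$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>*:*</query></delete>"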
- Then I reduced the JVM heap size from 6144m back to 5120m
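- In practice that is just the -Xmx flag in Tomcat’s JVM options; a sketch, assuming the heap is set via JAVA_OPTS in the Tomcat defaults file (the headless flag is only an example of the other options that live there):
JAVA_OPTS="-Djava.awt.headless=true -Xmx5120m"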
- Also, I switched it to use OpenJDK instead of Oracle Java, as well as re-worked the Ansible infrastructure scripts to support hosts choosing which distribution they want to use
Read more →
2018-04-01
- I tried to test something on DSpace Test but noticed that it’s been down since god knows when
- Catalina logs at least show some memory errors yesterday:
Read more →
2018-03-02
- Export a CSV of the IITA community metadata for Martin Mueller
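- A sketch of how that export looks with DSpace’s metadata-export CLI; the community handle here is a placeholder, not the real IITA handle:
$ [dspace]/bin/dspace metadata-export -i 10568/NNNNN -f /tmp/iita.csv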
Read more →
2018-02-01
- Peter gave feedback on the dc.rights proof of concept that I had sent him last week
- We don’t need to distinguish between internal and external works, so that makes it just a simple list
- Yesterday I figured out how to monitor DSpace sessions using JMX
- I copied the logic in the jmx_tomcat_dbpools plugin provided by Ubuntu’s munin-plugins-java package and used the stuff I discovered about JMX in 2018-01
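- For reference, exposing JMX on Tomcat means adding the standard com.sun.management.jmxremote system properties to CATALINA_OPTS; a sketch, where the port and the disabled authentication/SSL are assumptions suitable only for a localhost-only setup:
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=5400 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"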
Read more →
2018-01-02
- Uptime Robot noticed that CGSpace went down and up a few times last night, for a few minutes each time
- I didn’t get any load alerts from Linode and the REST and XMLUI logs don’t show anything out of the ordinary
- The nginx logs show HTTP 200s until 02/Jan/2018:11:27:17 +0000, when Uptime Robot got an HTTP 500
- In dspace.log around that time I see many errors like “Client closed the connection before file download was complete”
- And just before that I see this:
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-980] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:50; busy:50; idle:0; lastwait:5000].
- Ah hah! So the pool was actually empty!
- I need to increase that, let’s try to bump it up from 50 to 75
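- The 50 in that message is the pool’s maxActive; assuming the DSpace database pool is defined as a JNDI resource in Tomcat’s configuration (the other attributes below are illustrative, not CGSpace’s actual values), the bump looks something like:
<Resource name="jdbc/dspace" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="dspace"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          maxActive="75" maxIdle="15" maxWait="5000"/>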
- After that one client got an HTTP 499 but then the rest were HTTP 200, so I don’t know what the hell Uptime Robot saw
- I notice this error quite a few times in dspace.log:
2018-01-02 01:21:19,137 ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer @ Error while searching for sidebar facets
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse 'dateIssued_keyword:[1976+TO+1979]': Encountered " "]" "] "" at line 1, column 32.
- And there are many of these errors every day for the past month:
$ grep -c "Error while searching for sidebar facets" dspace.log.*
dspace.log.2017-11-21:4
dspace.log.2017-11-22:1
dspace.log.2017-11-23:4
dspace.log.2017-11-24:11
dspace.log.2017-11-25:0
dspace.log.2017-11-26:1
dspace.log.2017-11-27:7
dspace.log.2017-11-28:21
dspace.log.2017-11-29:31
dspace.log.2017-11-30:15
dspace.log.2017-12-01:15
dspace.log.2017-12-02:20
dspace.log.2017-12-03:38
dspace.log.2017-12-04:65
dspace.log.2017-12-05:43
dspace.log.2017-12-06:72
dspace.log.2017-12-07:27
dspace.log.2017-12-08:15
dspace.log.2017-12-09:29
dspace.log.2017-12-10:35
dspace.log.2017-12-11:20
dspace.log.2017-12-12:44
dspace.log.2017-12-13:36
dspace.log.2017-12-14:59
dspace.log.2017-12-15:104
dspace.log.2017-12-16:53
dspace.log.2017-12-17:66
dspace.log.2017-12-18:83
dspace.log.2017-12-19:101
dspace.log.2017-12-20:74
dspace.log.2017-12-21:55
dspace.log.2017-12-22:66
dspace.log.2017-12-23:50
dspace.log.2017-12-24:85
dspace.log.2017-12-25:62
dspace.log.2017-12-26:49
dspace.log.2017-12-27:30
dspace.log.2017-12-28:54
dspace.log.2017-12-29:68
dspace.log.2017-12-30:89
dspace.log.2017-12-31:53
dspace.log.2018-01-01:45
dspace.log.2018-01-02:34
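- For comparison with the SyntaxError above: a well-formed Lucene range query needs literal spaces around TO, and apparently the + signs (URL-encoded spaces) in the logged query were passed through undecoded; a sketch of querying the search core directly with a properly encoded parameter, where the host, port, and core name are assumptions:
$ curl -s "http://localhost:8081/solr/search/select" --data-urlencode "q=dateIssued_keyword:[1976 TO 1979]" --data-urlencode "rows=0"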
- Danny wrote to ask for help renewing the wildcard ilri.org certificate and I advised that we should probably use Let’s Encrypt if it’s just a handful of domains
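- A sketch of what that would look like with certbot’s webroot plugin; the domain and webroot path are placeholders:
$ certbot certonly --webroot -w /var/www/html -d subdomain.ilri.org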
Read more →
2017-12-01
- Uptime Robot noticed that CGSpace went down
- The logs say “Timeout waiting for idle object”
- PostgreSQL activity says there are 115 connections currently
- The list of connections to XMLUI and REST API for today:
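- The connection count presumably comes from pg_stat_activity, and the client list can be tallied from the nginx logs; a sketch of both, where the log paths and date pattern are assumptions:
dspace=# select count(*) from pg_stat_activity;
# zcat --force /var/log/nginx/access.log /var/log/nginx/rest.log | grep '01/Dec/2017' | awk '{print $1}' | sort | uniq -c | sort -rn | tail -n 10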
Read more →
2017-11-01
- The CORE developers responded to say they are looking into their bot not respecting our robots.txt
2017-11-02
- Today there have been no hits by CORE and no alerts from Linode (coincidence?)
# grep -c "CORE" /var/log/nginx/access.log
0
- Generate list of authors on CGSpace for Peter to go through and correct:
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
COPY 54701
Read more →
2017-10-01
http://hdl.handle.net/10568/78495||http://hdl.handle.net/10568/79336
- There appears to be a pattern but I’ll have to look a bit closer and try to clean them up automatically, either in SQL or in OpenRefine
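- A sketch of a query that could surface the affected values, assuming they live in dc.identifier.uri (that field choice is an assumption):
dspace=# \copy (select resource_id, text_value from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'identifier' and qualifier = 'uri') and resource_type_id = 2 and text_value like '%||%') to /tmp/multiple-handles.csv with csv;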
- Add Katherine Lutz to the groups for content submission and edit steps of the CGIAR System collections
Read more →
Rough notes for importing the CGIAR Library content. It was decided that this content would go to a new top-level community called CGIAR System Organization.
Read more →