- The nginx logs show HTTP 200s until `02/Jan/2018:11:27:17 +0000` when Uptime Robot got an HTTP 500
- In dspace.log around that time I see many errors like "Client closed the connection before file download was complete"
- And just before that I see this:
```
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-980] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:50; busy:50; idle:0; lastwait:5000].
```
- Ah hah! So the pool was actually empty!
- I need to increase that, let's try to bump it up from 50 to 75
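- In DSpace 5 that pool is configured in dspace.cfg; a sketch of the relevant settings (values illustrative, though the five-second `db.maxwait` matches the timeout in the error above):

```
# dspace.cfg database pool settings (sketch)
db.maxconnections = 75
# milliseconds to wait for a free connection before PoolExhaustedException
db.maxwait = 5000
```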
- After that one client got an HTTP 499 but then the rest were HTTP 200, so I don't know what the hell Uptime Robot saw
- I notice this error quite a few times in dspace.log:
```
2018-01-02 01:21:19,137 ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer @ Error while searching for sidebar facets
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse 'dateIssued_keyword:[1976+TO+1979]': Encountered " "]" "] "" at line 1, column 32.
```
- And there are many of these errors every day for the past month:
```
$ grep -c "Error while searching for sidebar facets" dspace.log.*
dspace.log.2017-11-21:4
dspace.log.2017-11-22:1
dspace.log.2017-11-23:4
dspace.log.2017-11-24:11
dspace.log.2017-11-25:0
dspace.log.2017-11-26:1
dspace.log.2017-11-27:7
dspace.log.2017-11-28:21
dspace.log.2017-11-29:31
dspace.log.2017-11-30:15
dspace.log.2017-12-01:15
dspace.log.2017-12-02:20
dspace.log.2017-12-03:38
dspace.log.2017-12-04:65
dspace.log.2017-12-05:43
dspace.log.2017-12-06:72
dspace.log.2017-12-07:27
dspace.log.2017-12-08:15
dspace.log.2017-12-09:29
dspace.log.2017-12-10:35
dspace.log.2017-12-11:20
dspace.log.2017-12-12:44
dspace.log.2017-12-13:36
dspace.log.2017-12-14:59
dspace.log.2017-12-15:104
dspace.log.2017-12-16:53
dspace.log.2017-12-17:66
dspace.log.2017-12-18:83
dspace.log.2017-12-19:101
dspace.log.2017-12-20:74
dspace.log.2017-12-21:55
dspace.log.2017-12-22:66
dspace.log.2017-12-23:50
dspace.log.2017-12-24:85
dspace.log.2017-12-25:62
dspace.log.2017-12-26:49
dspace.log.2017-12-27:30
dspace.log.2017-12-28:54
dspace.log.2017-12-29:68
dspace.log.2017-12-30:89
dspace.log.2017-12-31:53
dspace.log.2018-01-01:45
dspace.log.2018-01-02:34
```
- Danny wrote to ask for help renewing the wildcard ilri.org certificate and I advised that we should probably use Let's Encrypt if it's just a handful of domains
- I woke up to more instability on CGSpace: UptimeRobot noticed a few rounds of up and down lasting a few minutes each, and Linode also notified about high CPU load from 12 to 2 PM
- Looks like I need to increase the database pool size again:
```
$ grep -c "Timeout: Pool empty." dspace.log.2018-01-*
dspace.log.2018-01-01:0
dspace.log.2018-01-02:1972
dspace.log.2018-01-03:1909
```
- For some reason there were a lot of "active" connections last night:
- I have no idea what these are but they seem to be coming from Amazon...
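- For future reference, counting connections by database and state in psql is one way to quantify this (a sketch):

```
dspace=# SELECT datname, state, count(*) FROM pg_stat_activity GROUP BY datname, state ORDER BY count(*) DESC;
```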
- I guess for now I just have to increase the database connection pool's max active
- It's currently 75 and normally I'd just bump it by 25, but let me be a bit daring and push it by 50 to 125, because I used to see at least 121 connections in pg_stat_activity back when we were using the shitty default pooling
- But the pool was exhausted again even at 125:

```
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-256] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:125; busy:125; idle:0; lastwait:5000].
```
- So for this week that is the number one problem!
```
$ grep -c "Timeout: Pool empty." dspace.log.2018-01-*
dspace.log.2018-01-01:0
dspace.log.2018-01-02:1972
dspace.log.2018-01-03:1909
dspace.log.2018-01-04:1559
```
- I will just bump the connection limit to 300 because I'm fucking fed up with this shit
- Once I get back to Amman I will have to try to create different database pools for different web applications, like recently discussed on the dspace-tech mailing list
- Daniel asked for help with their DAGRIS server (linode2328112) that has no disk space
- I had a look and there is one Apache 2 log file that is 73GB, with lots of this:
```
[Fri Jan 05 09:31:22.965398 2018] [:error] [pid 9340] [client 213.55.99.121:64476] WARNING: Unable to find a match for "9-16-1-RV.doc" in "/home/files/journals/6//articles/9/". Skipping this file., referer: http://dagris.info/reviewtool/index.php/index/install/upgrade
```
- I will delete the log file for now and tell Danny
- Also, I'm still seeing a hundred or so of the "ERROR org.dspace.app.xmlui.aspect.discovery.SidebarFacetsTransformer" errors in dspace logs, I need to search the dspace-tech mailing list to see what the cause is
- Reboot CGSpace and DSpace Test for new kernels (4.14.12-x86_64-linode92) that partially mitigate the [Spectre and Meltdown CPU vulnerabilities](https://blog.linode.com/2018/01/03/cpu-vulnerabilities-meltdown-spectre/)
- Generate a list of author affiliations for Peter to clean up:
```
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'affiliation') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
```

- Attempting to shard the Solr statistics into yearly cores fails with this stack trace (truncated):

```
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:867)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 10 more
Caused by: org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity. The cause lists the reason the original request failed.
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:659)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:867)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 10 more
```
- There is interesting documentation about this on the DSpace Wiki: https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear
- I'm looking to see maybe if we're hitting the issues mentioned in [DS-2212](https://jira.duraspace.org/browse/DS-2212) that were apparently fixed in DSpace 5.2
- I can apparently search for records in the Solr stats core that have an empty `owningColl` field using this in the Solr admin query: `-owningColl:*`
- On CGSpace I see 48,000,000 records that have an `owningColl` field and 34,000,000 that don't:
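- Those counts come from rows=0 queries against the statistics core, reading `numFound` from each response (a sketch, assuming Solr listens on localhost:8081):

```
$ curl -s 'http://localhost:8081/solr/statistics/select?q=owningColl:*&rows=0&wt=json'
$ curl -s 'http://localhost:8081/solr/statistics/select?q=-owningColl:*&rows=0&wt=json'
```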
- I tested the `dspace stats-util -s` process on my local machine and it failed the same way
- It doesn't seem to be helpful, but the dspace log shows this:
```
2018-01-10 10:51:19,301 INFO org.dspace.statistics.SolrLogger @ Created core with name: statistics-2016
2018-01-10 10:51:19,301 INFO org.dspace.statistics.SolrLogger @ Moving: 3821 records into core statistics-2016
```
- Terry Brady has written some notes on the DSpace Wiki about Solr sharding issues: https://wiki.duraspace.org/display/%7Eterrywbrady/Statistics+Import+Export+Issues
- Uptime Robot said that CGSpace went down at around 9:43 AM
- I looked at PostgreSQL's `pg_stat_activity` table and saw 161 active connections, but no pool errors in the DSpace logs:
```
$ grep -c "Timeout: Pool empty." dspace.log.2018-01-10
0
```
- The XMLUI logs show quite a bit of activity today:
- Rather than blocking their IPs, I think I might just add their user agent to the "badbots" zone with Baidu, because they seem to be the only ones using that user agent:
```
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 | grep "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari
```
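- The general shape of this in nginx is a `map` from `$http_user_agent` to a value that keys a `limit_req_zone` (a sketch, not our exact config; note the very long exact-match key, which is what strains the map hash):

```
map $http_user_agent $ua_bot {
    default         '';
    ~Baiduspider    'bot';
    'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36' 'bot';
}
# requests with an empty key are not rate limited
limit_req_zone $ua_bot zone=badbots:10m rate=1r/s;
```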
- I added the user agent to nginx's badbots limit req zone but upon testing the config I got an error:
```
# nginx -t
nginx: [emerg] could not build map_hash, you should increase map_hash_bucket_size: 64
nginx: configuration file /etc/nginx/nginx.conf test failed
```
- According to nginx docs the [bucket size should be a multiple of the CPU's cache alignment](https://nginx.org/en/docs/hash.html), which is 64 for us:
```
# cat /proc/cpuinfo | grep cache_alignment | head -n1
cache_alignment : 64
```
- Since ours is 64, I increased this parameter to 128 and deployed the change to nginx
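- For reference, the directive lives in the `http` block:

```
http {
    # must be a multiple of the CPU cache line size (64 on our servers)
    map_hash_bucket_size 128;
}
```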
- Almost immediately the PostgreSQL connections dropped back down to 40 or so, and UptimeRobot said the site was back up
- So that's interesting that we're not out of PostgreSQL connections (current pool maxActive is 300!) but the system is "down" to UptimeRobot and very slow to use
- Following up with the Solr sharding issue on the dspace-tech mailing list, I noticed this interesting snippet in the Tomcat `localhost_access_log` at the time of my sharding attempt on my test machine:
- So theoretically I could name each connection "xmlui" or "dspaceWeb" or something meaningful and it would show up in PostgreSQL's `pg_stat_activity` table!
- This would be super helpful for figuring out where load was coming from (now I wonder if I could figure out how to graph this)
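- For example, if each pool's JDBC URL set the PostgreSQL driver's `ApplicationName` parameter, a query like this would break down connections per webapp (a sketch):

```
dspace=# SELECT application_name, state, count(*) FROM pg_stat_activity GROUP BY application_name, state ORDER BY count(*) DESC;
```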
- Also, I realized that the `db.jndi` parameter in dspace.cfg needs to match the `name` value in your application's context, not the `global` one
- Ah hah! Also, I can name the default DSpace connection pool in dspace.cfg as well, like:
- I'm looking at the [DSpace 6.0 Install docs](https://wiki.duraspace.org/display/DSDOC6x/Installing+DSpace#InstallingDSpace-ServletEngine(ApacheTomcat7orlater,Jetty,CauchoResinorequivalent)) and notice they tweak the number of threads in their Tomcat connector:
```
<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8080"
maxThreads="150"
minSpareThreads="25"
maxSpareThreads="75"
enableLookups="false"
redirectPort="8443"
acceptCount="100"
connectionTimeout="20000"
disableUploadTimeout="true"
URIEncoding="UTF-8"/>
```
- In Tomcat 8.5 the `maxThreads` defaults to 200 which is probably fine, but tweaking `minSpareThreads` could be good
- I don't see a setting for `maxSpareThreads` in the docs so that might be an error
- Looks like in Tomcat 8.5 the default URIEncoding for Connectors is UTF-8, so we don't need to specify that manually anymore: https://tomcat.apache.org/tomcat-8.5-doc/config/http.html
- The Tomcat docs for the Connector's `acceptorThreadCount` attribute are also relevant here: "The number of threads to be used to accept connections. Increase this value on a multi CPU machine, although you would never really need more than 2. Also, with a lot of non keep alive connections, you might want to increase this value as well. Default value is 1."
- While testing DSpace 6 on Tomcat 8.5 locally I saw warnings that our pool configuration uses old DBCP property names:

```
13-Jan-2018 13:59:05.245 WARNING [main] org.apache.tomcat.dbcp.dbcp2.BasicDataSourceFactory.getObjectInstance Name = dspace6 Property maxActive is not used in DBCP2, use maxTotal instead. maxTotal default value is 8. You have set value of "35" for "maxActive" property, which is being ignored.
13-Jan-2018 13:59:05.245 WARNING [main] org.apache.tomcat.dbcp.dbcp2.BasicDataSourceFactory.getObjectInstance Name = dspace6 Property maxWait is not used in DBCP2, use maxWaitMillis instead. maxWaitMillis default value is -1. You have set value of "5000" for "maxWait" property, which is being ignored.
```
- I looked in my Tomcat 7.0.82 logs and I don't see anything about DBCP2 errors, so I guess this is a Tomcat 8.0.x or 8.5.x thing
- DBCP2 appears to be Tomcat 8.0.x and up according to the [Tomcat 8.0 migration guide](https://tomcat.apache.org/migration-8.html)
- I have updated our [Ansible infrastructure scripts](https://github.com/ilri/rmg-ansible-public/commit/246f9d7b06d53794f189f0cc57ad5ddd80f0b014) so that it will be ready whenever we switch to Tomcat 8 (probably with Ubuntu 18.04 later this year)
- When I enable the ResourceLink in the ROOT.xml context I get the following error in the Tomcat localhost log:
```
13-Jan-2018 14:14:36.017 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.dspace.app.util.DSpaceWebappListener]
java.lang.ExceptionInInitializerError
at org.dspace.app.util.AbstractDSpaceWebapp.register(AbstractDSpaceWebapp.java:74)
at org.dspace.app.util.DSpaceWebappListener.contextInitialized(DSpaceWebappListener.java:31)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:629)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1839)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at org.dspace.storage.rdbms.DatabaseUtils.updateDatabase(DatabaseUtils.java:547)
at org.dspace.core.Context.<clinit>(Context.java:103)
... 15 more
```
- Interesting blog post benchmarking Tomcat JDBC vs Apache Commons DBCP2, with configuration snippets: http://www.tugay.biz/2016/07/tomcat-connection-pool-vs-apache.html
- The Tomcat vs Apache pool thing is confusing, but apparently we're using Apache Commons DBCP2 because we don't specify `factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"` in our global resource
- So at least I know that I'm not looking for documentation or troubleshooting on the Tomcat JDBC pool!
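- For the record, opting in to the Tomcat JDBC pool is just a matter of the `factory` attribute on the Resource; a sketch with placeholder credentials:

```
<Resource name="jdbc/dspace" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/dspace"
          username="dspace" password="dspace"
          initialSize="5" maxActive="50" maxIdle="15" />
```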
- I looked at `pg_stat_activity` during Tomcat's startup and I see that the pool created in server.xml is indeed connecting, just that nothing uses it
- Also, the fallback connection parameters specified in local.cfg (not dspace.cfg) are used
- Shit, this might actually be a DSpace error: https://jira.duraspace.org/browse/DS-3434
- Some had multiple values and he's corrected them by adding `||` in the correction column, but I can't process those this way so I will just have to flag them and do those manually later
- Also, I can flag the values that have "DELETE"
- Then I need to facet the correction column on isBlank(value) and not flagged
## 2018-01-15
- Help Udana from IWMI export a CSV from DSpace Test so he can start trying a batch upload
- I'm going to apply these ~130 corrections on CGSpace:
```
update metadatavalue set text_value='Formally Published' where resource_type_id=2 and metadata_field_id=214 and text_value like 'Formally published';
delete from metadatavalue where resource_type_id=2 and metadata_field_id=214 and text_value like 'NO';
update metadatavalue set text_value='en' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(En|English)';
update metadatavalue set text_value='fr' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(fre|frn|French)';
update metadatavalue set text_value='es' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(Spanish|spa)';
update metadatavalue set text_value='vi' where resource_type_id=2 and metadata_field_id=38 and text_value='Vietnamese';
update metadatavalue set text_value='ru' where resource_type_id=2 and metadata_field_id=38 and text_value='Ru';
update metadatavalue set text_value='in' where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(IN|In)';
delete from metadatavalue where resource_type_id=2 and metadata_field_id=38 and text_value ~ '(dc.language.iso|CGIAR Challenge Program on Water and Food)';
```
- Continue proofing Peter's author corrections that I started yesterday, faceting on non-blank, non-flagged values, and briefly scrolling through the corrections to find encoding errors in French and Spanish names
```
dspace=# select handle from item, handle where handle.resource_id = item.item_id AND item.item_id = '4369';
handle
--------
(0 rows)
```
- Even searching in the DSpace advanced search for author equals "Tarawali" produces nothing...
- Otherwise, the [DSpace 5 SQL Helper Functions](https://wiki.duraspace.org/display/DSPACE/Helper+SQL+functions+for+DSpace+5) provide `ds5_item2itemhandle()`, which is much easier than my long query above that I always have to go search for
- For example, to find the Handle for an item that has the author "Erni":
```
dspace=# select * from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value='Erni';
```
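- Assuming the helper functions are installed, the resource_id from that query can then go straight into `ds5_item2itemhandle()`, something like:

```
dspace=# SELECT ds5_item2itemhandle(resource_id) FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=3 AND text_value='Erni';
```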
- Now I made a new list of affiliations for Peter to look through:
```
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where metadata_schema_id = 2 and element = 'contributor' and qualifier = 'affiliation') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
COPY 4552
```
- Looking over the affiliations again I see dozens of CIAT ones with their affiliation formatted like: International Center for Tropical Agriculture (CIAT)
- For example, this one is from just last month: https://cgspace.cgiar.org/handle/10568/89930
- Our controlled vocabulary has this in the format without the abbreviation: International Center for Tropical Agriculture
- So some submitters don't know to use the controlled vocabulary lookup
- Discuss standardized names for CRPs and centers with ICARDA (don't wait for CG Core)
- Re-send DC rights implementation and forward to everyone so we can move forward with it (without the URI field for now)
- Start looking at where I was with the AGROVOC API
- Have a controlled vocabulary for CGIAR authors' names and ORCIDs? Perhaps values like: Orth, Alan S. (0000-0002-1735-7458)
- Need to find the metadata field name that ICARDA is using for their ORCIDs
- Update text for DSpace version plan on wiki
- Come up with an SLA, something like: _In return for your contribution we will, to the best of our ability, ensure 99.5% ("two and a half nines") uptime of CGSpace, ensure data is stored in open formats and safely backed up, follow CG Core metadata standards, ..._
- Add Sisay and Danny to Uptime Robot and allow them to restart Tomcat on CGSpace ✔
- I removed Tsega's SSH access to the web and DSpace servers, and asked Danny to check whether there is anything he needs from Tsega's home directories so we can delete the accounts completely
- I removed Tsega's access to Linode dashboard as well
- But I do see this strange message in the dspace log:
```
2018-01-17 07:59:25,856 INFO org.apache.http.impl.client.SystemDefaultHttpClient @ I/O exception (org.apache.http.NoHttpResponseException) caught when processing request to {}->http://localhost:8081: The target server failed to respond
2018-01-17 07:59:25,856 INFO org.apache.http.impl.client.SystemDefaultHttpClient @ Retrying request to {}->http://localhost:8081
```
- I have NEVER seen this error before, and there is no error before or after that in DSpace's solr.log
- Tomcat's catalina.out does show something interesting, though, right at that time:
- Looking at the JVM graphs from Munin it does look like the heap ran out of memory (see the blue dip just before the green spike when I restarted Tomcat):
- I'm playing with maven repository caching using Artifactory in a Docker instance: https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker
- Then configure the local maven to use it in settings.xml with the settings from "Set Me Up": https://www.jfrog.com/confluence/display/RTF/Using+Artifactory
- This could be a game changer for testing and running the Docker DSpace image
- Wow, I even managed to add the Atmire repository as a remote and map it into the `libs-release` virtual repository, then tell maven to use it for `atmire.com-releases` in settings.xml!
- Hmm, some maven dependencies for the SWORDv2 web application in DSpace 5.5 are broken:
```
[ERROR] Failed to execute goal on project dspace-swordv2: Could not resolve dependencies for project org.dspace:dspace-swordv2:war:5.5: Failed to collect dependencies at org.swordapp:sword2-server:jar:classes:1.0 -> org.apache.abdera:abdera-client:jar:1.1.1 -> org.apache.abdera:abdera-core:jar:1.1.1 -> org.apache.abdera:abdera-i18n:jar:1.1.1 -> org.apache.geronimo.specs:geronimo-activation_1.0.2_spec:jar:1.1: Failed to read artifact descriptor for org.apache.geronimo.specs:geronimo-activation_1.0.2_spec:jar:1.1: Could not find artifact org.apache.geronimo.specs:specs:pom:1.1 in central (http://localhost:8081/artifactory/libs-release) -> [Help 1]
```
- I never noticed because I build with that web application disabled:
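- The build command is something like this, deactivating module profiles with `-P` (a sketch; I'm assuming the profile names match the module names):

```
$ mvn -U clean package -P '!dspace-sword,!dspace-swordv2'
```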
- Regarding the heap space error earlier today, it looks like it does happen a few times a week or month (I'm not sure how far these logs go back, as they are not strictly daily):
- UptimeRobot said CGSpace was down for 1 minute last night
- I don't see any errors in the nginx or catalina logs, so I guess UptimeRobot just got impatient and closed the request, which caused nginx to send an HTTP 499
- I realize I never did a full re-index after the SQL author and affiliation updates last week, so I should force one now:
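- That should just be the standard full Discovery reindex, where `-b` rebuilds the index from scratch:

```
$ dspace index-discovery -b
```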
- Maria from Bioversity asked if I could remove the abstracts from all of their Limited Access items in the [Bioversity Journal Articles](https://cgspace.cgiar.org/handle/10568/35501) collection
- It's easy enough to do in OpenRefine, but you have to be careful to only get those items that are uploaded into Bioversity's collection, not the ones that are mapped from others!
- Use this GREL in OpenRefine after isolating all the Limited Access items: `value.startsWith("10568/35501")`
- UptimeRobot said CGSpace went down AGAIN and both Sisay and Danny immediately logged in and restarted Tomcat without talking to me *or* each other!
- I had to cancel the Discovery indexing and I'll have to re-try it another time when the server isn't so busy (it had already taken two hours and wasn't even close to being done)
- For now I've increased the Tomcat JVM heap from 5632m to 6144m, to give ~1GB of free memory over the average usage to hopefully account for spikes caused by load or background jobs
- Linode alerted again and said that CGSpace was using 301% CPU
- Peter emailed to ask why [this item](https://cgspace.cgiar.org/handle/10568/88090) doesn't have an Altmetric badge on CGSpace but does have one on the [Altmetric dashboard](https://www.altmetric.com/details/26709041)
- Looks like our badge code calls the `handle` endpoint which doesn't exist:
- I want to document the workflow of adding a production PostgreSQL database to a development instance of [DSpace in Docker](https://github.com/alanorth/docker-dspace):
- Look over Udana's CSV of 25 WLE records from last week
- I sent him some corrections:
- The file encoding is Windows-1252
- There were whitespace issues in the dc.identifier.citation field (spaces at the beginning and end, and multiple spaces in between some words)
- Also, the authors listed in the citation need to be in normal format, separated by commas or colons (however you prefer), not with ||
- There were spaces in the beginning and end of some cg.identifier.doi fields
- Make sure that the cg.coverage.countries field is just countries: ie, no "SOUTH ETHIOPIA" or "EAST AFRICA" (the first should just be ETHIOPIA, the second should be in cg.coverage.region instead)
- The current list of regions we use is here: https://github.com/ilri/DSpace/blob/5_x-prod/dspace/config/input-forms.xml#L5162
- You have a syntax error in your cg.coverage.regions (extra ||)
- The value of dc.identifier.issn should just be the ISSN but you have: eISSN: 1479-487X
- I wrote a quick Python script to use the DSpace REST API to find all collections under a given community
- The source code is here: [rest-find-collections.py](https://gist.github.com/alanorth/ddd7f555f0e487fe0e9d3eb4ff26ce50)
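- Under the hood it's just the DSpace 5 REST API, with calls like this (hypothetical community ID):

```
$ curl -s -H "Accept: application/json" 'https://cgspace.cgiar.org/rest/communities/16/collections'
```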
- Peter had said that he found a bunch of ILRI collections that were called "untitled", but I don't see any:
- I see I can monitor the number of Tomcat threads and some detailed JVM memory stuff if I install `munin-plugins-java`
- I'd still like to get arbitrary mbeans like activeSessions etc, though
- I can't remember if I had to configure the jmx settings in `/etc/munin/plugin-conf.d/munin-node` or not—I think all I did was re-run the `munin-node-configure` script and of course enable JMX in Tomcat's JVM options
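- For reference, enabling JMX is a handful of flags in Tomcat's JVM options (a sketch for local, unauthenticated monitoring only; the port is arbitrary):

```
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=9010 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false"
```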
- Thinking about generating a jmeter test plan for DSpace, along the lines of [Georgetown's dspace-performance-test](https://github.com/Georgetown-University-Libraries/dspace-performance-test)
- I got a list of all the GET requests on CGSpace for January 21st (the last time Linode complained the load was high), excluding admin calls:
- Atmire responded to [my issue from two weeks ago](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560) and said they will start looking into DSpace 5.8 compatibility for CGSpace
- I set up a new Arch Linux Linode instance with 8192 MB of RAM and ran the test plan a few times to get a baseline:
```
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
```
- Run another round of tests on DSpace Test with jmeter after changing Tomcat's `minSpareThreads` to 20 (default is 10) and `acceptorThreadCount` to 2 (default is 1):
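- Those two are plain Connector attributes; a sketch showing only the changed ones:

```
<Connector port="8080" protocol="HTTP/1.1"
           minSpareThreads="20"
           acceptorThreadCount="2" />
```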
- Peter followed up about some of the points from the Skype meeting last week
- Regarding the ORCID field issue, I see [ICARDA's MELSpace is using `cg.creator.ID`](http://repo.mel.cgiar.org/handle/20.500.11766/7668?show=full): 0000-0001-9156-7691
- I had floated the idea of using a controlled vocabulary with values formatted something like: Orth, Alan S. (0000-0002-1735-7458)
- Update PostgreSQL JDBC driver version from 42.1.4 to 42.2.1 on DSpace Test, see: https://jdbc.postgresql.org/
- Reboot DSpace Test to get new Linode kernel (Linux 4.14.14-x86_64-linode94)
- I am testing my old work on the `dc.rights` field, I had added a branch for it a few months ago
- I added a list of Creative Commons and other licenses in `input-forms.xml`
- The problem is that Peter wanted to use two questions, one for CG centers and one for others, but storing to the same metadata field, which isn't possible (?)
- So I used some creativity and made several fields display values but not store any, i.e.:
```
<pair>
<displayed-value>For products published by another party:</displayed-value>
<stored-value></stored-value>
</pair>
```
- I was worried that if a user selected this field for some reason that DSpace would store an empty value, but it simply doesn't register that as a valid option: