- Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!
- The Solr statistics the past few months have been very high and I was wondering if the web server logs also showed an increase
- There were just over 3 million accesses in the nginx logs last month:
```
# time zcat --force /var/log/nginx/* | grep -cE "[0-9]{1,2}/Jan/2019"
3018243
real 0m19.873s
user 0m22.203s
sys 0m1.979s
```
<!--more-->
- Normally I'd say this was very high, but [about this time last year]({{< relref "2018-02.md" >}}) I remember thinking the same thing when we had 3.1 million...
- I will have to keep an eye on this to see if there is some error in Solr...
- Atmire sent their [pull request to re-enable the Metadata Quality Module (MQM) on our `5_x-dev` branch](https://github.com/ilri/DSpace/pull/407) today
- `45.5.184.2` is CIAT and `85.25.237.71` is the new Linguee bot that I first noticed a few days ago
- I will increase the Linode alert threshold from 275 to 300% because this is becoming too much!
- I tested the Atmire Metadata Quality Module (MQM)'s duplicate checker on DSpace Test (linode19) against some [WLE items](https://dspacetest.cgiar.org/handle/10568/81268) that I helped Udana with a few months ago, and indeed it found many duplicates!
- `45.5.184.2` is CIAT, `70.32.83.92` and `205.186.128.185` are Macaroni Bros harvesters for CCAFS I think
- `195.201.104.240` is a new IP address in Germany with the following user agent:
```
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:62.0) Gecko/20100101 Firefox/62.0
```
- This user was making 20–60 requests per minute this morning... seems like I should try to block this type of behavior heuristically, regardless of user agent!
- This user was making requests to `/browse`, which is not currently under the existing rate limiting of dynamic pages in our nginx config
- I [extended the existing `dynamicpages` (12/m) rate limit to `/browse` and `/discover`](https://github.com/ilri/rmg-ansible-public/commit/36dfb072d6724fb5cdc81ef79cab08ed9ce427ad) with an allowance for bursting of up to five requests for "real" users
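- Conceptually the change looks like this in nginx (a sketch: the zone name matches our config, but the zone size and the `proxy_pass` upstream are placeholders):

```
# One shared state zone keyed on client IP: 12 requests per minute
limit_req_zone $binary_remote_addr zone=dynamicpages:16m rate=12r/m;

# Apply the limit to the browse and discover UIs, allowing a short
# burst of up to five requests before nginx starts returning HTTP 503
location ~ ^/(browse|discover) {
    limit_req zone=dynamicpages burst=5;
    proxy_pass http://tomcat_http; # placeholder upstream name
}
```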
- Generate a list of CTA subjects from CGSpace for Peter:
```
dspace=# \copy (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=124 GROUP BY text_value ORDER BY COUNT DESC) to /tmp/cta-subjects.csv with csv header;
```
- This morning there was another alert from Linode about the high load on CGSpace (linode18), here are the top IPs in the web server logs before, during, and after that time:
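- For reference, the top-IP lists come from a pipeline along these lines (demonstrated on a small inlined sample so it runs anywhere; on the server it would read the real `/var/log/nginx` logs for the hour in question):

```shell
# Build a tiny sample in nginx "combined" format; on CGSpace this would be
# zcat --force /var/log/nginx/*.log* piped through a grep for the time window
cat << 'EOF' > /tmp/sample-access.log
45.5.184.2 - - [05/Feb/2019:09:00:01 +0000] "GET /rest/items HTTP/1.1" 200 512 "-" "-"
45.5.184.2 - - [05/Feb/2019:09:00:02 +0000] "GET /rest/items HTTP/1.1" 200 512 "-" "-"
85.25.237.71 - - [05/Feb/2019:09:00:03 +0000] "GET /browse HTTP/1.1" 200 1024 "-" "-"
EOF

# The first column is the client IP; count occurrences and rank them
awk '{print $1}' /tmp/sample-access.log | sort | uniq -c | sort -rn | head -n 10
```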
- At this rate I think I just need to stop paying attention to these alerts—DSpace gets thrashed when people use the APIs properly and there's nothing we can do to improve REST API performance!
- Perhaps I just need to keep increasing the Linode alert threshold (currently 300%) for this host?
- Peter sent me corrections and deletions for the CTA subjects and, as usual, there were encoding errors with some accented characters in his file
- In other news, it seems that the GREL syntax regarding booleans changed in OpenRefine recently, so I need to update some expressions like the one I use to detect encoding errors to use `toString()`:
```
or(
isNotNull(value.match(/.*\uFFFD.*/)),
isNotNull(value.match(/.*\u00A0.*/)),
isNotNull(value.match(/.*\u200A.*/)),
isNotNull(value.match(/.*\u2019.*/)),
isNotNull(value.match(/.*\u00b4.*/)),
isNotNull(value.match(/.*\u007e.*/))
).toString()
```
- Testing the corrections for sixty-five items and sixteen deletions using my [fix-metadata-values.py](https://gist.github.com/alanorth/df92cbfb54d762ba21b28f7cd83b6897) and [delete-metadata-values.py](https://gist.github.com/alanorth/bd7d58c947f686401a2b1fadc78736be) scripts:
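- The invocations were along these lines (the CSV file names and password are placeholders and the flags are from memory, so treat this as a sketch; `-m 124` is the `metadata_field_id` for the CTA subject field):

```
$ ./fix-metadata-values.py -i /tmp/cta-corrections.csv -db dspace -u dspace -p 'fuuu' -f cg.subject.cta -m 124 -t correct
$ ./delete-metadata-values.py -i /tmp/cta-deletions.csv -db dspace -u dspace -p 'fuuu' -f cg.subject.cta -m 124
```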
- So it seems that the load issue comes from the REST API, not the XMLUI
- I could probably rate limit the REST API, or maybe just keep increasing the alert threshold so I don't get alert spam (this is probably the correct approach because it seems like the REST API can keep up with the requests and is returning HTTP 200 status as far as I can tell)
- Bosede from IITA sent a message that a colleague is having problems submitting to some collections in their community:
```
Authorization denied for action WORKFLOW_STEP_1 on COLLECTION:1056 by user 1759
```
- Collection 1056 appears to be [IITA Posters and Presentations](https://cgspace.cgiar.org/handle/10568/68741) and I see that its workflow step 1 (Accept/Reject) is empty:
![IITA Posters and Presentations workflow step 1 empty](/cgspace-notes/2019/02/iita-workflow-step1-empty.png)
- IITA editors or approvers should be added to that step (though I'm curious why nobody is in that group currently)
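- To check who is actually in that workflow group, a query along these lines should work against the DSpace 5 schema (the `workflow_step_1` column on `collection` holds the group ID; column names are from memory, so verify them before relying on this):

```
dspace=# SELECT e.email FROM eperson e
    JOIN epersongroup2eperson g2e ON g2e.eperson_id = e.eperson_id
    JOIN collection c ON c.workflow_step_1 = g2e.eperson_group_id
    WHERE c.collection_id = 1056;
```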
- CGNET said these servers were discontinued in 2018-01 and that I should use [Office 365](https://docs.microsoft.com/en-us/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-office-3)
## 2019-02-08
- I re-configured CGSpace to use the email/password for cgspace-support, but I get this error when I try the `test-email` script:
```
Error sending email:
- Error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.57 SMTP; Client was not authenticated to send anonymous mail during MAIL FROM [AM6PR10CA0028.EURPRD10.PROD.OUTLOOK.COM]
```
- I tried to log into Outlook 365 with the credentials but I think the ones I have must be wrong, so I will ask ICT to reset the password
- Linode sent another alert about CGSpace (linode18) CPU load this morning, here are the top IPs in the web server XMLUI and API logs before, during, and after that time:
- I think I need to increase the Linode alert threshold from 300 to 350% now so I stop getting some of these alerts—it's becoming a bit of *the boy who cried wolf* because it alerts like clockwork twice per day!
- Add my Python- and shell-based metadata workflow helper scripts as well as the environment settings for pipenv to our DSpace repository ([#408](https://github.com/ilri/DSpace/pull/408)) so I can track changes and distribute them more formally instead of just keeping them [collected on the wiki](https://github.com/ilri/DSpace/wiki/Scripts)
- Started adding IITA research theme (`cg.identifier.iitatheme`) to CGSpace
- I'm still waiting for feedback from IITA whether they actually want to use "SOCIAL SCIENCE & AGRIC BUSINESS" because it is listed as ["Social Science and Agribusiness"](http://www.iita.org/project-discipline/social-science-and-agribusiness/) on their website
- Also, I think they want to do some mappings of items with existing subjects to these new themes
- Update ILRI author name style in the controlled vocabulary (Domelevo Entfellner, Jean-Baka) ([#409](https://github.com/ilri/DSpace/pull/409))
- I'm still waiting to hear from Bizuwork whether we'll batch update all existing items with the old name style
- Last week Hector Tobon from CCAFS asked me about the Creative Commons 3.0 Intergovernmental Organizations (IGO) license because it is not in the list of SPDX licenses
- Today I made [a request](http://13.57.134.254/app/license_requests/15/) to SPDX using [their web form](https://github.com/spdx/license-list-XML/blob/master/CONTRIBUTING.md) to include [this class of Creative Commons licenses](https://wiki.creativecommons.org/wiki/Intergovernmental_Organizations)
- Testing the `mail.server.disabled` property that I noticed in `dspace.cfg` recently
- Setting it to true results in the following message when I try the `dspace test-email` helper on DSpace Test:
```
Error sending email:
- Error: cannot test email because mail.server.disabled is set to true
```
- I'm not sure why I didn't know about this configuration option before; until now I have always maintained separate configurations for development and production
- I will modify the [Ansible DSpace role](https://github.com/ilri/rmg-ansible-public) to use this in its `build.properties` template
- I updated my local Sonatype nexus Docker image and had an issue with the volume for some reason so I decided to just start from scratch:
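- Starting from scratch means dropping the old container and volume and pulling a fresh image; roughly (the container and volume names here are placeholders for whatever was used locally):

```
$ docker rm -f nexus
$ docker volume rm nexus_data
$ docker pull sonatype/nexus3
$ docker run -d --name nexus -p 8081:8081 -v nexus_data:/nexus-data sonatype/nexus3
```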
- For some reason my `mvn package` for DSpace is not working now... I might go back to [using Artifactory for caching](https://mjanja.ch/2018/02/cache-maven-artifacts-with-artifactory/) instead:
- I notice that [DSpace 6 has included a new JAR-based PDF thumbnailer based on PDFBox](https://jira.duraspace.org/browse/DS-3052), I wonder how good its thumbnails are and how it handles CMYK PDFs
- On a similar note, I wonder if we could use the performance-focused [libvips](https://libvips.github.io/libvips/) and the third-party [jlibvips Java library](https://github.com/codecitizen/jlibvips/) in DSpace
- Testing the `vipsthumbnail` command line tool with [this CGSpace item that uses CMYK](https://cgspace.cgiar.org/handle/10568/51999):
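- A first attempt looks something like this (the `-s` target size and the JPEG quality in the output pattern are just values to try; assumes the item's PDF has been downloaded locally as `10568-51999.pdf`):

```
$ vipsthumbnail 10568-51999.pdf -s 600 -o '%s.jpg[Q=85]'
```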
- Trying the `test-email` script again results in the same authentication error:

```
Error sending email:
 - Error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.57 SMTP; Client was not authenticated to send anonymous mail during MAIL FROM [AM6PR06CA0001.eurprd06.prod.outlook.com]
```
- I tried to log into the Outlook 365 web mail and it doesn't work so I've emailed ILRI ICT again
- After reading the [common mistakes in the JavaMail FAQ](https://javaee.github.io/javamail/FAQ#commonmistakes) I reconfigured the extra properties in DSpace's mail configuration to be simply:
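- If memory serves, that meant keeping only the STARTTLS property in `dspace.cfg`'s `mail.extraproperties` (this exact line is reconstructed, so double-check it against the JavaMail FAQ):

```
mail.extraproperties = mail.smtp.starttls.enable=true
```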
- ... and then I was able to send a mail using my personal account where I know the credentials work
- The CGSpace account still gets this error message:
```
Error sending email:
- Error: javax.mail.AuthenticationFailedException
```
- I updated the [DSpace SMTP settings in `dspace.cfg`](https://github.com/ilri/DSpace/pull/410) as well as the [variables in the DSpace role of the Ansible infrastructure scripts](https://github.com/ilri/rmg-ansible-public/commit/ab5fe4d10e16413cd04ffb1bc3179dc970d6d47c)
- Thierry from CTA is having issues with his account on DSpace Test, and there is no admin password reset function on DSpace (only via email, which is disabled on DSpace Test), so I have to delete and re-create his account:
- Test re-creating my local PostgreSQL and Artifactory containers with podman instead of Docker (using the volumes from my old Docker containers though):
- Which totally works, but Podman's rootless support doesn't work with port mappings yet...
- Deploy the Tomcat-7-from-tarball branch on CGSpace (linode18), but first stop the Ubuntu Tomcat 7 and do some basic prep before running the Ansible playbook:
```
# systemctl stop tomcat7
# apt remove tomcat7 tomcat7-admin
# useradd -m -r -s /bin/bash dspace
# mv /usr/share/tomcat7/.m2 /home/dspace
# mv /usr/share/tomcat7/src /home/dspace
# chown -R dspace:dspace /home/dspace
# chown -R dspace:dspace /home/cgspace.cgiar.org
# dpkg -P tomcat7-admin tomcat7-common
```
- After running the playbook CGSpace came back up, but I had an issue with some Solr cores not being loaded (similar to last month) and this was in the Solr log: