CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

February, 2019

2019-02-01

  • Linode has alerted a few times since last night that the CPU usage on CGSpace (linode18) was high despite me increasing the alert threshold last week from 250% to 275%—I might need to increase it again!
  • The top IPs before, during, and after this latest alert tonight were:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "01/Feb/2019:(17|18|19|20|21)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    245 207.46.13.5
    332 54.70.40.11
    385 5.143.231.38
    405 207.46.13.173
    405 207.46.13.75
   1117 66.249.66.219
   1121 35.237.175.180
   1546 5.9.6.51
   2474 45.5.186.2
   5490 85.25.237.71
  • 85.25.237.71 is the “Linguee Bot” that I first saw last month
  • The Solr statistics for the past few months have been very high and I was wondering if the web server logs also showed an increase
  • There were just over 3 million accesses in the nginx logs last month:
# time zcat --force /var/log/nginx/* | grep -cE "[0-9]{1,2}/Jan/2019"
3018243

real    0m19.873s
user    0m22.203s
sys     0m1.979s

2019-02-02

  • Another alert from Linode about CGSpace (linode18) this morning, here are the top IPs in the web server logs before, during, and after that time:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "02/Feb/2019:0(1|2|3|4|5)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    284 18.195.78.144
    329 207.46.13.32
    417 35.237.175.180
    448 34.218.226.147
    694 2a01:4f8:13b:1296::2
    718 2a01:4f8:140:3192::2
    786 137.108.70.14
   1002 5.9.6.51
   6077 85.25.237.71
   8726 45.5.184.2
  • 45.5.184.2 is CIAT and 85.25.237.71 is the new Linguee bot that I first noticed a few days ago
  • I will increase the Linode alert threshold from 275 to 300% because this is becoming too much!
  • I tested the Atmire Metadata Quality Module (MQM)’s duplicate checker on some of the WLE items that I helped Udana with a few months ago on DSpace Test (linode19) and indeed it found many duplicates!

2019-02-03

  • This is seriously getting annoying, Linode sent another alert this morning that CGSpace (linode18) load was 377%!
  • Here are the top IPs before, during, and after that time:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "03/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    325 85.25.237.71
    340 45.5.184.72
    431 5.143.231.8
    756 5.9.6.51
   1048 34.218.226.147
   1203 66.249.66.219
   1496 195.201.104.240
   4658 205.186.128.185
   4658 70.32.83.92
   4852 45.5.184.2
  • 45.5.184.2 is CIAT, 70.32.83.92 and 205.186.128.185 are Macaroni Bros harvesters for CCAFS I think
  • 195.201.104.240 is a new IP address in Germany with the following user agent:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:62.0) Gecko/20100101 Firefox/62.0
  • This user was making 20–60 requests per minute this morning… seems like I should try to block this type of behavior heuristically, regardless of user agent!
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "03/Feb/2019" | grep 195.201.104.240 | grep -o -E '03/Feb/2019:0[0-9]:[0-9][0-9]' | uniq -c | sort -n | tail -n 20
     19 03/Feb/2019:07:42
     20 03/Feb/2019:07:12
     21 03/Feb/2019:07:27
     21 03/Feb/2019:07:28
     25 03/Feb/2019:07:23
     25 03/Feb/2019:07:29
     26 03/Feb/2019:07:33
     28 03/Feb/2019:07:38
     30 03/Feb/2019:07:31
     33 03/Feb/2019:07:35
     33 03/Feb/2019:07:37
     38 03/Feb/2019:07:40
     43 03/Feb/2019:07:24
     43 03/Feb/2019:07:32
     46 03/Feb/2019:07:36
     47 03/Feb/2019:07:34
     47 03/Feb/2019:07:39
     47 03/Feb/2019:07:41
     51 03/Feb/2019:07:26
     59 03/Feb/2019:07:25
  • At least they re-used their Tomcat session!
$ grep -o -E 'session_id=[A-Z0-9]{32}:ip_addr=195.201.104.240' dspace.log.2019-02-03 | sort | uniq | wc -l
1
  • This user was making requests to /browse, which is not currently under the existing rate limiting of dynamic pages in our nginx config
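  • A quick way to check whether a path like that falls under the nginx rate limit would be to hammer it from the command line and count the response codes, since a limited location should start returning HTTP 503 (or whatever limit_req_status is set to) once the limit kicks in (the hostname and query string here are only examples):
$ for i in $(seq 1 30); do curl -s -o /dev/null -w '%{http_code}\n' 'https://dspacetest.cgiar.org/browse?type=dateissued'; done | sort | uniq -c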
  • Run all system updates on linode20 and reboot it
    • This will be the new AReS repository explorer server soon

2019-02-04

  • Generate a list of CTA subjects from CGSpace for Peter:
dspace=# \copy (SELECT DISTINCT text_value, count(*) FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=124 GROUP BY text_value ORDER BY COUNT DESC) to /tmp/cta-subjects.csv with csv header;
COPY 321
  • Skype with Michael Victor about CKM and CGSpace
  • Discuss the new IITA research theme field with Abenet and decide that we should use cg.identifier.iitatheme
  • This morning there was another alert from Linode about the high load on CGSpace (linode18), here are the top IPs in the web server logs before, during, and after that time:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "04/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    589 2a01:4f8:140:3192::2
    762 66.249.66.219
    889 35.237.175.180
   1332 34.218.226.147
   1393 5.9.6.51
   1940 50.116.102.77
   3578 85.25.237.71
   4311 45.5.184.2
   4658 205.186.128.185
   4658 70.32.83.92
  • At this rate I think I just need to stop paying attention to these alerts—DSpace gets thrashed when people use the APIs properly and there’s nothing we can do to improve REST API performance!
  • Perhaps I just need to keep increasing the Linode alert threshold (currently 300%) for this host?

2019-02-05

  • Peter sent me corrections and deletions for the CTA subjects and, as usual, there were encoding errors with some accents in his file
  • In other news, it seems that the GREL syntax regarding booleans changed in OpenRefine recently, so I need to update some expressions like the one I use to detect encoding errors to use toString():
or(
  isNotNull(value.match(/.*\uFFFD.*/)),
  isNotNull(value.match(/.*\u00A0.*/)),
  isNotNull(value.match(/.*\u200A.*/)),
  isNotNull(value.match(/.*\u2019.*/)),
  isNotNull(value.match(/.*\u00b4.*/)),
  isNotNull(value.match(/.*\u007e.*/))
).toString()
$ ./fix-metadata-values.py -i 2019-02-04-Correct-65-CTA-Subjects.csv -f cg.subject.cta -t CORRECT -m 124 -db dspace -u dspace -p 'fuu' -d
$ ./delete-metadata-values.py -i 2019-02-04-Delete-16-CTA-Subjects.csv -f cg.subject.cta -m 124 -db dspace -u dspace -p 'fuu' -d
  • I applied them on DSpace Test and CGSpace and started a full Discovery re-index:
$ export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xmx1024m"
$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b
  • Peter had marked several terms with || to indicate multiple values in his corrections so I will have to go back and do those manually:
EMPODERAMENTO DE JOVENS,EMPODERAMENTO||JOVENS
ENVIRONMENTAL PROTECTION AND NATURAL RESOURCES MANAGEMENT,NATURAL RESOURCES MANAGEMENT||ENVIRONMENT
FISHERIES AND AQUACULTURE,FISHERIES||AQUACULTURE
MARKETING AND TRADE,MARKETING||TRADE
MARKETING ET COMMERCE,MARKETING||COMMERCE
NATURAL RESOURCES AND ENVIRONMENT,NATURAL RESOURCES MANAGEMENT||ENVIRONMENT
PÊCHES ET AQUACULTURE,PÊCHES||AQUACULTURE
PESCAS E AQUACULTURE,PISCICULTURA||AQUACULTURE

2019-02-06

  • I dumped the CTA community so I can try to fix the subjects with multiple subjects that Peter indicated in his corrections:
$ dspace metadata-export -i 10568/42211 -f /tmp/cta.csv
  • Then I used csvcut to get only the CTA subject columns:
$ csvcut -c "id,collection,cg.subject.cta,cg.subject.cta[],cg.subject.cta[en_US]" /tmp/cta.csv > /tmp/cta-subjects.csv
  • After that I imported the CSV into OpenRefine where I could properly identify and edit the subjects as multiple values
  • Then I imported it back into CGSpace:
$ dspace metadata-import -f /tmp/2019-02-06-CTA-multiple-subjects.csv
  • Another day, another alert about high load on CGSpace (linode18) from Linode
  • This time the load average was 370% and the top ten IPs before, during, and after that time were:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "06/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    689 35.237.175.180
   1236 5.9.6.51
   1305 34.218.226.147
   1580 66.249.66.219
   1939 50.116.102.77
   2313 108.212.105.35
   4666 205.186.128.185
   4666 70.32.83.92
   4950 85.25.237.71
   5158 45.5.186.2
  • Looking closer at the top users, I see 45.5.186.2 is in Brazil and was making over 100 requests per minute to the REST API:
# zcat --force /var/log/nginx/rest.log /var/log/nginx/rest.log.1 | grep 45.5.186.2 | grep -o -E '06/Feb/2019:0[0-9]:[0-9][0-9]' | uniq -c | sort -n | tail -n 10
    118 06/Feb/2019:05:46
    119 06/Feb/2019:05:37
    119 06/Feb/2019:05:47
    120 06/Feb/2019:05:43
    120 06/Feb/2019:05:44
    121 06/Feb/2019:05:38
    122 06/Feb/2019:05:39
    125 06/Feb/2019:05:42
    126 06/Feb/2019:05:40
    126 06/Feb/2019:05:41
  • I was thinking of rate limiting those because I assumed most of them would be errors, but actually most are HTTP 200 OK!
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E '06/Feb/2019' | grep 45.5.186.2 | awk '{print $9}' | sort | uniq -c
  10411 200
      1 301
      7 302
      3 404
     18 499
      2 500
  • I should probably start looking at the top IPs for web (XMLUI) and for API (REST and OAI) separately:
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "06/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    328 220.247.212.35
    372 66.249.66.221
    380 207.46.13.2
    519 2a01:4f8:140:3192::2
    572 5.143.231.8
    689 35.237.175.180
    771 108.212.105.35
   1236 5.9.6.51
   1554 66.249.66.219
   4942 85.25.237.71
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "06/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
     10 66.249.66.221
     26 66.249.66.219
     69 5.143.231.8
    340 45.5.184.72
   1040 34.218.226.147
   1542 108.212.105.35
   1937 50.116.102.77
   4661 205.186.128.185
   4661 70.32.83.92
   5102 45.5.186.2

2019-02-07

  • Linode sent an alert last night that the load on CGSpace (linode18) was over 300%
  • Here are the top IPs in the API and web server logs before, during, and after that time, respectively:
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "06/Feb/2019:(17|18|19|20|23)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
      5 66.249.66.209
      6 2a01:4f8:210:51ef::2
      6 40.77.167.75
      9 104.198.9.108
      9 157.55.39.192
     10 157.55.39.244
     12 66.249.66.221
     20 95.108.181.88
     27 66.249.66.219
   2381 45.5.186.2
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "06/Feb/2019:(17|18|19|20|23)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    455 45.5.186.2
    506 40.77.167.75
    559 54.70.40.11
    825 157.55.39.244
    871 2a01:4f8:140:3192::2
    938 157.55.39.192
   1058 85.25.237.71
   1416 5.9.6.51
   1606 66.249.66.219
   1718 35.237.175.180
  • Then again this morning another alert:
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "07/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
      5 66.249.66.223
      8 104.198.9.108
     13 110.54.160.222
     24 66.249.66.219
     25 175.158.217.98
    214 34.218.226.147
    346 45.5.184.72
   4529 45.5.186.2
   4661 205.186.128.185
   4661 70.32.83.92
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "07/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    145 157.55.39.237
    154 66.249.66.221
    214 34.218.226.147
    261 35.237.175.180
    273 2a01:4f8:140:3192::2
    300 169.48.66.92
    487 5.143.231.39
    766 5.9.6.51
    771 85.25.237.71
    848 66.249.66.219
  • So it seems that the load issue comes from the REST API, not the XMLUI
  • I could probably rate limit the REST API, or maybe just keep increasing the alert threshold so I don’t get alert spam (this is probably the correct approach because it seems like the REST API can keep up with the requests and is returning HTTP 200 status as far as I can tell)
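  • For reference, the kind of paginated harvesting request these API users are presumably making looks something like this (expand, limit, and offset are standard DSpace 5 REST API parameters; the hostname and values are only illustrative):
$ curl -s 'https://cgspace.cgiar.org/rest/items?expand=metadata,bitstreams&limit=100&offset=0'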
  • Bosede from IITA sent a message that a colleague is having problems submitting to some collections in their community:
Authorization denied for action WORKFLOW_STEP_1 on COLLECTION:1056 by user 1759

IITA Posters and Presentations workflow step 1 empty

  • IITA editors or approvers should be added to that step (though I’m curious why nobody is in that group currently)
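  • A query like this against the DSpace 5 database should show who, if anyone, is in that collection’s workflow step 1 group (the table and column names are assumed from the stock DSpace 5 schema, and 1056 is the collection ID from the error above):
$ psql -d dspace -c "SELECT g.name, e.email FROM collection c JOIN epersongroup g ON g.eperson_group_id = c.workflow_step_1 LEFT JOIN epersongroup2eperson g2e ON g2e.eperson_group_id = g.eperson_group_id LEFT JOIN eperson e ON e.eperson_id = g2e.eperson_id WHERE c.collection_id = 1056;"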
  • Abenet says we are not using the “Accept/Reject” step so this group should be deleted
  • Bizuwork asked about the “DSpace Submission Approved and Archived” emails that stopped working last month
  • I tried the test-email command on DSpace and it indeed is not working:
$ dspace test-email

About to send test email:
 - To: aorth@mjanja.ch
 - Subject: DSpace test email
 - Server: smtp.serv.cgnet.com

Error sending email:
 - Error: javax.mail.MessagingException: Could not connect to SMTP host: smtp.serv.cgnet.com, port: 25;
  nested exception is:
        java.net.ConnectException: Connection refused (Connection refused)

Please see the DSpace documentation for assistance.
  • I can’t connect to TCP port 25 on that server so I sent a mail to CGNET support to ask what’s up
  • CGNET said these servers were discontinued in 2018-01 and that I should use Office 365

2019-02-08

  • I re-configured CGSpace to use the email/password for cgspace-support, but I get this error when I try the test-email script:
Error sending email:
 - Error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.57 SMTP; Client was not authenticated to send anonymous mail during MAIL FROM [AM6PR10CA0028.EURPRD10.PROD.OUTLOOK.COM]
  • I tried to log into Outlook 365 with the credentials but I think the ones I have must be wrong, so I will ask ICT to reset the password
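  • To rule out DSpace itself, the SMTP credentials could be tested directly with openssl’s s_client and a manual EHLO and AUTH LOGIN using the base64-encoded username and password (smtp.office365.com:587 is assumed here as the Office 365 submission endpoint):
$ openssl s_client -starttls smtp -crlf -connect smtp.office365.com:587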

2019-02-09

  • Linode sent alerts about CPU load yesterday morning, yesterday night, and this morning! All over 300% CPU load!
  • This is just for this morning:
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "09/Feb/2019:(07|08|09|10|11)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    289 35.237.175.180
    290 66.249.66.221
    296 18.195.78.144
    312 207.46.13.201
    393 207.46.13.64
    526 2a01:4f8:140:3192::2
    580 151.80.203.180
    742 5.143.231.38
   1046 5.9.6.51
   1331 66.249.66.219
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "09/Feb/2019:(07|08|09|10|11)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
      4 66.249.83.30
      5 49.149.10.16
      8 207.46.13.64
      9 207.46.13.201
     11 105.63.86.154
     11 66.249.66.221
     31 66.249.66.219
    297 2001:41d0:d:1990::
    908 34.218.226.147
   1947 50.116.102.77
  • I know 66.249.66.219 is Google, 5.9.6.51 is MegaIndex, and 5.143.231.38 is SputnikBot
  • Ooh, but 151.80.203.180 is some malicious bot making requests for /etc/passwd like this:
/bitstream/handle/10568/68981/Identifying%20benefit%20flows%20studies%20on%20the%20potential%20monetary%20and%20non%20monetary%20benefits%20arising%20from%20the%20International%20Treaty%20on%20Plant%20Genetic_1671.pdf?sequence=1&isAllowed=../etc/passwd
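  • Something like this would count how many of those traversal probes the IP sent:
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep 151.80.203.180 | grep -c 'etc/passwd'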
  • 151.80.203.180 is on OVH so I sent a message to their abuse email…

2019-02-10

  • Linode sent another alert about CGSpace (linode18) CPU load this morning, here are the top IPs in the web server XMLUI and API logs before, during, and after that time:
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "10/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    232 18.195.78.144
    238 35.237.175.180
    281 66.249.66.221
    314 151.80.203.180
    319 34.218.226.147
    326 40.77.167.178
    352 157.55.39.149
    444 2a01:4f8:140:3192::2
   1171 5.9.6.51
   1196 66.249.66.219
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "10/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
      6 112.203.241.69
      7 157.55.39.149
      9 40.77.167.178
     15 66.249.66.219
    368 45.5.184.72
    432 50.116.102.77
    971 34.218.226.147
   4403 45.5.186.2
   4668 205.186.128.185
   4668 70.32.83.92
  • Another interesting thing might be the total number of requests for web and API services during that time:
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -cE "10/Feb/2019:0(5|6|7|8|9)"
16333
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -cE "10/Feb/2019:0(5|6|7|8|9)"
15964
  • Also, the number of unique IPs served during that time:
# zcat --force /var/log/nginx/{access,error,library-access}.log /var/log/nginx/{access,error,library-access}.log.1 | grep -E "10/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq | wc -l
1622
# zcat --force /var/log/nginx/{oai,rest,statistics}.log /var/log/nginx/{oai,rest,statistics}.log.1 | grep -E "10/Feb/2019:0(5|6|7|8|9)" | awk '{print $1}' | sort | uniq | wc -l
95
  • It’s very clear to me now that the API requests are the heaviest!
  • I think I need to increase the Linode alert threshold from 300 to 350% now so I stop getting some of these alerts; it’s becoming a bit like the boy who cried wolf because it alerts like clockwork twice per day!
  • Add my Python- and shell-based metadata workflow helper scripts as well as the environment settings for pipenv to our DSpace repository (#408) so I can track changes and distribute them more formally instead of just keeping them collected on the wiki
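  • With the Pipfile in the repository anyone should be able to recreate the same Python environment, for example:
$ pipenv install
$ pipenv shell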
  • Started adding IITA research theme (cg.identifier.iitatheme) to CGSpace
    • I’m still waiting for feedback from IITA on whether they actually want to use “SOCIAL SCIENCE & AGRIC BUSINESS” because it is listed as “Social Science and Agribusiness” on their website
    • Also, I think they want to do some mappings of items with existing subjects to these new themes
  • Update ILRI author name style in the controlled vocabulary (Domelevo Entfellner, Jean-Baka) (#409)
    • I’m still waiting to hear from Bizuwork whether we’ll batch update all existing items with the old name style
    • No, there is only one entry and Bizu already fixed it
  • Last week Hector Tobon from CCAFS asked me about the Creative Commons 3.0 Intergovernmental Organizations (IGO) license because it is not in the list of SPDX licenses
  • Testing the mail.server.disabled property that I noticed in dspace.cfg recently
    • Setting it to true results in the following message when I try the dspace test-email helper on DSpace Test:
Error sending email:
 - Error: cannot test email because mail.server.disabled is set to true
  • I’m not sure why I didn’t know about this configuration option before; until now I’ve always maintained separate configurations for development and production
  • I updated my local Sonatype nexus Docker image and had an issue with the volume for some reason so I decided to just start from scratch:
# docker rm nexus
# docker pull sonatype/nexus3
# mkdir -p /home/aorth/.local/lib/containers/volumes/nexus_data
# chown 200:200 /home/aorth/.local/lib/containers/volumes/nexus_data
# docker run --name nexus --network dspace-build -d -v /home/aorth/.local/lib/containers/volumes/nexus_data:/nexus-data -p 8081:8081 sonatype/nexus3
# docker pull docker.bintray.io/jfrog/artifactory-oss:latest
# mkdir -p /home/aorth/.local/lib/containers/volumes/artifactory5_data
# chown 1030 /home/aorth/.local/lib/containers/volumes/artifactory5_data
# docker run --name artifactory --network dspace-build -d -v /home/aorth/.local/lib/containers/volumes/artifactory5_data:/var/opt/jfrog/artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss

2019-02-11

  • Bosede from IITA said we can use “SOCIAL SCIENCE & AGRIBUSINESS” in their new IITA theme field to be consistent with other places they are using it
  • Run all system updates on DSpace Test (linode19) and reboot it

2019-02-12

  • Testing PDF thumbnail generation with vipsthumbnail (from libvips):
$ vipsthumbnail alc_contrastes_desafios.pdf -s 300 -o '%s.jpg[Q=92,optimize_coding,strip]'
  • (DSpace 5 appears to use JPEG 92 quality so I do the same)
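  • ImageMagick’s identify can confirm the quality and dimensions of the resulting JPEG (%Q, %w, and %h print the JPEG quality, width, and height; adjust the filename to whatever vipsthumbnail wrote):
$ identify -format '%Q %wx%h\n' alc_contrastes_desafios.jpg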
  • Thinking about making “top items” endpoints in my dspace-statistics-api
  • I could use the following SQL queries very easily to get the top items by views or downloads:
dspacestatistics=# SELECT * FROM items WHERE views > 0 ORDER BY views DESC LIMIT 10;
dspacestatistics=# SELECT * FROM items WHERE downloads > 0 ORDER BY downloads DESC LIMIT 10;
  • I’d have to think about what to name the REST API endpoints, perhaps: /statistics/top/items?limit=10
  • But how do I do top items by views / downloads separately?
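  • One hypothetical way would be a query parameter that selects the statistic, for example (nothing is implemented yet, and localhost:5000 just stands in for a local development instance of the API):
$ curl -s 'http://localhost:5000/statistics/top/items?limit=10&by=views'
$ curl -s 'http://localhost:5000/statistics/top/items?limit=10&by=downloads'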
  • I re-deployed DSpace 6.3 locally to test the PDFBox thumbnails, especially to see if they handle CMYK files properly
    • The quality is JPEG 75 and I don’t see a way to set the thumbnail dimensions, but the resulting image is indeed sRGB:
$ identify -verbose alc_contrastes_desafios.pdf.jpg
...
  Colorspace: sRGB
  • I will read the PDFBox thumbnailer documentation to see if I can change the size and quality

2019-02-13

  • ILRI ICT reset the password for the CGSpace mail account, but I still can’t get it to send mail from DSpace’s test-email utility
  • I even added extra mail properties to dspace.cfg as suggested by someone on the dspace-tech mailing list:
mail.extraproperties = mail.smtp.starttls.required = true, mail.smtp.auth=true
  • But the result is still:
Error sending email:
 - Error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.57 SMTP; Client was not authenticated to send anonymous mail during MAIL FROM [AM6PR06CA0001.eurprd06.prod.outlook.com]
  • I tried to log into the Outlook 365 web mail and it doesn’t work so I’ve emailed ILRI ICT again
  • After reading the common mistakes in the JavaMail FAQ I reconfigured the extra properties in DSpace’s mail configuration to be simply:
mail.extraproperties = mail.smtp.starttls.enable=true
  • … and then I was able to send a mail using my personal account where I know the credentials work
  • The CGSpace account still gets this error message:
Error sending email:
 - Error: javax.mail.AuthenticationFailedException
  • I also deleted and re-created the account for Thierry Lewyllie from CTA:
$ dspace user --delete --email blah@cta.int
$ dspace user --add --givenname Thierry --surname Lewyllie --email blah@cta.int --password 'blah'
  • On this note, I saw a thread on the dspace-tech mailing list that says the ability to log in as another user exists if you enable webui.user.assumelogin = true
  • I will enable this on CGSpace (#411)