---
title: "April, 2018"
date: 2018-04-01T16:13:54+02:00
author: "Alan Orth"
categories: ["Notes"]
---

## 2018-04-01

- I tried to test something on DSpace Test but noticed that it's been down since god knows when
- Catalina logs at least show some memory errors yesterday:

<!--more-->

```
Mar 31, 2018 10:26:42 PM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor run
SEVERE: Unexpected death of background thread ContainerBackgroundProcessor[StandardEngine[Catalina]]
java.lang.OutOfMemoryError: Java heap space

Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" java.lang.OutOfMemoryError: Java heap space
```

- So this is getting super annoying
- I ran all system updates on DSpace Test and rebooted it
- For some reason Listings and Reports is not giving any results for any queries now...
- I posted a message on Yammer to ask if people are using the Duplicate Check step from the Metadata Quality Module
- Help Lili Szilagyi with a question about statistics on some CCAFS items

## 2018-04-04

- Peter noticed that there were still some old CRP names on CGSpace, because I hadn't forced the Discovery index to be updated after I fixed the others last week
- For completeness I re-ran the CRP corrections on CGSpace:

```
$ ./fix-metadata-values.py -i /tmp/Correct-21-CRPs-2018-03-16.csv -f cg.contributor.crp -t correct -m 230 -db dspace -u dspace -p 'fuuu'
Fixed 1 occurences of: AGRICULTURE FOR NUTRITION AND HEALTH
```

- Then started a full Discovery index:

```
$ export JAVA_OPTS='-Dfile.encoding=UTF-8 -Xmx1024m'
$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b

real 76m13.841s
user 8m22.960s
sys 2m2.498s
```

- Elizabeth from CIAT emailed to ask if I could help her by adding ORCID identifiers to all of Joseph Tohme's items
- I used my [add-orcid-identifiers-csv.py](https://gist.githubusercontent.com/alanorth/a49d85cd9c5dea89cddbe809813a7050/raw/f67b6e45a9a940732882ae4bb26897a9b245ef31/add-orcid-identifiers-csv.py) script:

```
$ ./add-orcid-identifiers-csv.py -i /tmp/jtohme-2018-04-04.csv -db dspace -u dspace -p 'fuuu'
```

- The CSV format of `jtohme-2018-04-04.csv` was:

```csv
dc.contributor.author,cg.creator.id
"Tohme, Joseph M.",Joe Tohme: 0000-0003-2765-7101
```

- There was a quoting error in my CRP CSV and the replacements for `Forests, Trees and Agroforestry` got messed up
- So I fixed them and had to re-index again!
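- For future reference, the fix is just making sure any value containing a comma is wrapped in double quotes so the CSV parser doesn't split it; a hypothetical snippet of a corrected row (column names inferred from the `-f` and `-t` options used with fix-metadata-values.py above):

```csv
cg.contributor.crp,correct
"FORESTS, TREES AND AGROFORESTRY","Forests, Trees and Agroforestry"
```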

- I started preparing the git branch for the DSpace 5.5→5.8 upgrade:

```
$ git checkout -b 5_x-dspace-5.8 5_x-prod
$ git reset --hard ilri/5_x-prod
$ git rebase -i dspace-5.8
```

- I was prepared to skip some commits that I had cherry-picked from the upstream `dspace-5_x` branch when we did the DSpace 5.5 upgrade (see notes on 2016-10-19 and 2017-12-17):
  - [DS-3246] Improve cleanup in recyclable components (upstream commit on dspace-5_x: 9f0f5940e7921765c6a22e85337331656b18a403)
  - [DS-3250] applying patch provided by Atmire (upstream commit on dspace-5_x: c6fda557f731dbc200d7d58b8b61563f86fe6d06)
  - bump up to latest minor pdfbox version (upstream commit on dspace-5_x: b5330b78153b2052ed3dc2fd65917ccdbfcc0439)
  - DS-3583 Usage of correct Collection Array (#1731) (upstream commit on dspace-5_x: c8f62e6f496fa86846bfa6bcf2d16811087d9761)
- ... but somehow git knew, and didn't include them in my interactive rebase! (a quick way to verify this with `git cherry` is sketched below)
- I need to send this branch to Atmire and also arrange payment (see [ticket #560](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560) in their tracker)
- Fix Sisay's SSH access to the new DSpace Test server (linode19)
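- As a sanity check on those skipped commits, `git cherry` should show which commits on our branch already have an equivalent patch in the upstream branch (they get a `-` prefix); a quick sketch, assuming the branch names above:

```
$ git cherry -v dspace-5.8 ilri/5_x-prod | grep '^-'
```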

## 2018-04-05

- Fix Sisay's sudo access on the new DSpace Test server (linode19)
- The reindexing process on DSpace Test took _forever_ yesterday:

```
$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b

real 599m32.961s
user 9m3.947s
sys 2m52.585s
```

- So we really should not use this Linode block storage for Solr
- Assetstore might be fine but would complicate things with configuration and deployment (ughhh)
- Better to use Linode block storage only for backup
- Help Peter with the GDPR compliance / reporting form for CGSpace
- DSpace Test crashed due to memory issues again:

```
# grep -c 'java.lang.OutOfMemoryError: Java heap space' /var/log/tomcat7/catalina.out
16
```

- I ran all system updates on DSpace Test and rebooted it
- Proof some records on DSpace Test for Udana from IWMI
- He has done better with the small syntax and consistency issues, but there are larger concerns like not linking to DOIs and copying titles incorrectly

## 2018-04-10

- I got a notice that CGSpace CPU usage was very high this morning
- Looking at the nginx logs, here are the top users today so far:

```
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "10/Apr/2018" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
282 207.46.13.112
286 54.175.208.220
287 207.46.13.113
298 66.249.66.153
322 207.46.13.114
780 104.196.152.243
3994 178.154.200.38
4295 70.32.83.92
4388 95.108.181.88
7653 45.5.186.2
```

- 45.5.186.2 is of course CIAT
- 95.108.181.88 appears to be Yandex:

```
95.108.181.88 - - [09/Apr/2018:06:34:16 +0000] "GET /bitstream/handle/10568/21794/ILRI_logo_usage.jpg.jpg HTTP/1.1" 200 2638 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
```

- And for some reason Yandex created a lot of Tomcat sessions today:

```
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=95.108.181.88' dspace.log.2018-04-10
4363
```

- 70.32.83.92 appears to be some harvester we've seen before, but on a new IP
- They are not creating new Tomcat sessions so there is no problem there
- 178.154.200.38 also appears to be Yandex, and is also creating many Tomcat sessions:

```
$ grep -c -E 'session_id=[A-Z0-9]{32}:ip_addr=178.154.200.38' dspace.log.2018-04-10
3982
```

- I'm not sure why Yandex creates so many Tomcat sessions, as its user agent should match the Crawler Session Manager valve
- Let's try a manual request with and without their user agent:

```
$ http --print Hh https://cgspace.cgiar.org/bitstream/handle/10568/21794/ILRI_logo_usage.jpg.jpg 'User-Agent:Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)'
GET /bitstream/handle/10568/21794/ILRI_logo_usage.jpg.jpg HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: cgspace.cgiar.org
User-Agent: Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)

HTTP/1.1 200 OK
Connection: keep-alive
Content-Language: en-US
Content-Length: 2638
Content-Type: image/jpeg;charset=ISO-8859-1
Date: Tue, 10 Apr 2018 05:18:37 GMT
Expires: Tue, 10 Apr 2018 06:18:37 GMT
Last-Modified: Tue, 25 Apr 2017 07:05:54 GMT
Server: nginx
Strict-Transport-Security: max-age=15768000
Vary: User-Agent
X-Cocoon-Version: 2.2.0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block

$ http --print Hh https://cgspace.cgiar.org/bitstream/handle/10568/21794/ILRI_logo_usage.jpg.jpg
GET /bitstream/handle/10568/21794/ILRI_logo_usage.jpg.jpg HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: cgspace.cgiar.org
User-Agent: HTTPie/0.9.9

HTTP/1.1 200 OK
Connection: keep-alive
Content-Language: en-US
Content-Length: 2638
Content-Type: image/jpeg;charset=ISO-8859-1
Date: Tue, 10 Apr 2018 05:20:08 GMT
Expires: Tue, 10 Apr 2018 06:20:08 GMT
Last-Modified: Tue, 25 Apr 2017 07:05:54 GMT
Server: nginx
Set-Cookie: JSESSIONID=31635DB42B66D6A4208CFCC96DD96875; Path=/; Secure; HttpOnly
Strict-Transport-Security: max-age=15768000
Vary: User-Agent
X-Cocoon-Version: 2.2.0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
```

- So it definitely looks like Yandex requests are getting assigned a session from the Crawler Session Manager valve
- And if I look at the DSpace log I see its IP sharing a session with other crawlers like Google (66.249.66.153)
- Indeed the number of Tomcat sessions appears to be normal:

![Tomcat sessions week](/cgspace-notes/2018/04/jmx_dspace_sessions-week.png)

- In other news, it looks like the number of total requests processed by nginx in March went down from the previous months:

```
# time zcat --force /var/log/nginx/* | grep -cE "[0-9]{1,2}/Mar/2018"
2266594

real 0m13.658s
user 0m16.533s
sys 0m1.087s
```

- In other other news, the database cleanup script has an issue again:

```
$ dspace cleanup -v
...
Error: ERROR: update or delete on table "bitstream" violates foreign key constraint "bundle_primary_bitstream_id_fkey" on table "bundle"
Detail: Key (bitstream_id)=(151626) is still referenced from table "bundle".
```

- The solution is, as always:

```
$ psql dspace -c 'update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (151626);'
UPDATE 1
```

- Looking at abandoned connections in Tomcat:

```
# zcat /var/log/tomcat7/catalina.out.[1-9].gz | grep -c 'org.apache.tomcat.jdbc.pool.ConnectionPool abandon'
2115
```

- Apparently from these stacktraces we should be able to see which code is not closing connections properly (a rough way to extract that is sketched below)
- Here's a pretty good overview of days where we had database issues recently:

```
# zcat /var/log/tomcat7/catalina.out.[1-9].gz | grep 'org.apache.tomcat.jdbc.pool.ConnectionPool abandon' | awk '{print $1,$2, $3}' | sort | uniq -c | sort -n
1 Feb 18, 2018
1 Feb 19, 2018
1 Feb 20, 2018
1 Feb 24, 2018
2 Feb 13, 2018
3 Feb 17, 2018
5 Feb 16, 2018
5 Feb 23, 2018
5 Feb 27, 2018
6 Feb 25, 2018
40 Feb 14, 2018
63 Feb 28, 2018
154 Mar 19, 2018
202 Feb 21, 2018
264 Feb 26, 2018
268 Mar 21, 2018
524 Feb 22, 2018
570 Feb 15, 2018
```
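
- Going back to those abandoned connection stack traces, a rough (untested) sketch of how we might extract the DSpace classes that borrowed the connections:

```
# zcat /var/log/tomcat7/catalina.out.[1-9].gz | grep -A 30 'org.apache.tomcat.jdbc.pool.ConnectionPool abandon' | grep -o -E 'org\.dspace\.[A-Za-z0-9._]+' | sort | uniq -c | sort -rn | head
```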

- In Tomcat 8.5 the `removeAbandoned` property has been split into two: `removeAbandonedOnBorrow` and `removeAbandonedOnMaintenance`
- See: https://tomcat.apache.org/tomcat-8.5-doc/jndi-datasource-examples-howto.html#Database_Connection_Pool_(DBCP_2)_Configurations
- I assume we want `removeAbandonedOnBorrow`, and will make updates to the Tomcat 8 templates in Ansible
- After reading more documentation I see that Tomcat 8.5's default DBCP seems to now be Commons DBCP2 instead of Tomcat DBCP
- It can be overridden in Tomcat's _server.xml_ by setting `factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"` in the `<Resource>`
- I think we should use this default, so we'll need to remove some other settings that are specific to Tomcat's DBCP like `jdbcInterceptors` and `abandonWhenPercentageFull`
- Merge the changes adding ORCID identifier to advanced search and Atmire Listings and Reports ([#371](https://github.com/ilri/DSpace/pull/371))
- Fix one more issue of missing XMLUI strings (for CRP subject when clicking "view more" in the Discovery sidebar)
- I told Udana to fix the citation and abstract of the one item, and to correct the `dc.language.iso` for the five Spanish items in his Book Chapters collection
- Then we can import the records to CGSpace

## 2018-04-11

- DSpace Test (linode19) crashed again some time since yesterday:

```
# grep -c 'java.lang.OutOfMemoryError: Java heap space' /var/log/tomcat7/catalina.out
168
```

- I ran all system updates and rebooted the server

## 2018-04-12

- I caught wind of an interesting XMLUI performance optimization coming in DSpace 6.3: https://jira.duraspace.org/browse/DS-3883
- I asked for it to be ported to DSpace 5.x

## 2018-04-13

- Add `PII-LAM_CSAGender` to CCAFS Phase II project tags in `input-forms.xml`

## 2018-04-15

- While testing an XMLUI patch for [DS-3883](https://jira.duraspace.org/browse/DS-3883) I noticed that there is still some Authority / Solr configuration left over that we need to remove:

```
2018-04-14 18:55:25,841 ERROR org.dspace.authority.AuthoritySolrServiceImpl @ Authority solr is not correctly configured, check "solr.authority.server" property in the dspace.cfg
java.lang.NullPointerException
```

- I assume we need to remove `authority` from the consumers in `dspace/config/dspace.cfg`:

```
event.dispatcher.default.consumers = authority, versioning, discovery, eperson, harvester, statistics,batchedit, versioningmqm
```
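
- Presumably the fixed line just drops `authority` and leaves everything else alone (a sketch, pending testing):

```
event.dispatcher.default.consumers = versioning, discovery, eperson, harvester, statistics, batchedit, versioningmqm
```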

- I see the same error on DSpace Test so this is definitely a problem
- After disabling the authority consumer I no longer see the error
- I merged a pull request to the `5_x-prod` branch to clean that up ([#372](https://github.com/ilri/DSpace/pull/372))
- File a ticket on DSpace's Jira for the `target="_blank"` security and performance issue ([DS-3891](https://jira.duraspace.org/browse/DS-3891))
- I re-deployed DSpace Test (linode19) and was surprised by how long it took the ant update to complete:

```
BUILD SUCCESSFUL
Total time: 4 minutes 12 seconds
```

- The Linode block storage is much slower than the instance storage
- I ran all system updates and rebooted DSpace Test (linode19)

## 2018-04-16

- Communicate with Bioversity about their project to migrate their e-Library (Typo3) and Sci-lit databases to CGSpace

## 2018-04-18

- IWMI people are asking about building a search query that outputs RSS for their reports
- They want the same results as this Discovery query: https://cgspace.cgiar.org/discover?filtertype_1=dateAccessioned&filter_relational_operator_1=contains&filter_1=2018&submit_apply_filter=&query=&scope=10568%2F16814&rpp=100&sort_by=dc.date.issued_dt&order=desc
- They will need to use OpenSearch, but I can't remember all the parameters
- Apparently search sort options for OpenSearch are in `dspace.cfg`:

```
webui.itemlist.sort-option.1 = title:dc.title:title
webui.itemlist.sort-option.2 = dateissued:dc.date.issued:date
webui.itemlist.sort-option.3 = dateaccessioned:dc.date.accessioned:date
webui.itemlist.sort-option.4 = type:dc.type:text
```

- They want items by issue date, so we need to use sort option 2
- According to the DSpace Manual there are only the following parameters to OpenSearch: format, scope, rpp, start, and sort_by
- The OpenSearch `query` parameter expects a Discovery search filter that is defined in `dspace/config/spring/api/discovery.xml`
- So for IWMI they should be able to use something like this: https://cgspace.cgiar.org/open-search/discover?query=dateIssued:2018&scope=10568/16814&sort_by=2&order=DESC&format=rss
- There are also `rpp` (results per page) and `start` parameters but in my testing now on DSpace 5.5 they behave very strangely
- For example, set `rpp=1` and then check the results for `start` values of 0, 1, and 2 and they are all the same! (example requests below)
- If I have time I will check if this behavior persists on DSpace 6.x on the official DSpace demo and file a bug
- Also, the DSpace Manual as of 5.x has very poor documentation for OpenSearch
- They don't tell you to use Discovery search filters in the `query` (with format `query=dateIssued:2018`)
- They don't tell you that the sort options are actually defined in `dspace.cfg` (ie, you need to use `2` instead of `dc.date.issued_dt`)
- They are missing the `order` parameter (ASC vs DESC)
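- For reference, these are the kinds of requests I mean when testing the paging parameters (hypothetical examples building on the IWMI query above):

```
$ http 'https://cgspace.cgiar.org/open-search/discover?query=dateIssued:2018&scope=10568/16814&sort_by=2&order=DESC&format=rss&rpp=1&start=0'
$ http 'https://cgspace.cgiar.org/open-search/discover?query=dateIssued:2018&scope=10568/16814&sort_by=2&order=DESC&format=rss&rpp=1&start=1'
$ http 'https://cgspace.cgiar.org/open-search/discover?query=dateIssued:2018&scope=10568/16814&sort_by=2&order=DESC&format=rss&rpp=1&start=2'
```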

- I notice that DSpace Test has crashed again, due to memory:

```
# grep -c 'java.lang.OutOfMemoryError: Java heap space' /var/log/tomcat7/catalina.out
178
```

- I will increase the JVM heap size from 5120M to 6144M, though we don't have much room left to grow as DSpace Test (linode19) is using a smaller instance size than CGSpace
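- Assuming the heap is still set via `JAVA_OPTS` in the Tomcat defaults file (managed by our Ansible templates), the change is basically just bumping `-Xmx`, hypothetically something like:

```
JAVA_OPTS="-Djava.awt.headless=true -Xmx6144m"
```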

- Gabriela from CIP asked if I could send her a list of all CIP authors so she can do some replacements on the name formats
- I got a list of all the CIP collections manually and used the same query that I used in [August, 2017](/cgspace-notes/2017-08):

```
dspace#= \copy (select distinct text_value, count(*) from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 AND resource_id IN (select item_id from collection2item where collection_id IN (select resource_id from handle where handle in ('10568/89347', '10568/88229', '10568/53086', '10568/53085', '10568/69069', '10568/53087', '10568/53088', '10568/53089', '10568/53090', '10568/53091', '10568/53092', '10568/70150', '10568/53093', '10568/64874', '10568/53094'))) group by text_value order by count desc) to /tmp/cip-authors.csv with csv;
```

## 2018-04-19

- Run updates on DSpace Test (linode19) and reboot the server
- Also try deploying updated GeoLite database during ant update while re-deploying code:

```
$ ant update update_geolite clean_backups
```

- I also re-deployed CGSpace (linode18) to make the ORCID search, authority cleanup, and CCAFS project tag `PII-LAM_CSAGender` changes live
- When re-deploying I also updated the GeoLite databases so I hope the country stats become more accurate...
- After re-deployment I ran all system updates on the server and rebooted it
- After the reboot I forced a reïndexing of Discovery to populate the new ORCID index:

```
$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b

real 73m42.635s
user 8m15.885s
sys 2m2.687s
```

- This time was with about 70,000 items in the repository

## 2018-04-20

- Gabriela from CIP emailed to say that CGSpace was returning a white page, but I haven't seen any emails from UptimeRobot
- I confirmed that it was just giving a white page around 4:16
- The DSpace logs show that there are no database connections:

```
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-715] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:18; idle:0; lastwait:5000].
```

- And there have been shit tons of errors recently (starting only 20 minutes ago, luckily):

```
# grep -c 'org.apache.tomcat.jdbc.pool.PoolExhaustedException' /home/cgspace.cgiar.org/log/dspace.log.2018-04-20
32147
```

- I can't even log into PostgreSQL as the `postgres` user, WTF?

```
$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
^C
```
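
- When `psql` itself hangs like that, one way to at least get a rough count of connections is from the OS side (a sketch; this just counts every TCP socket touching port 5432):

```
# ss -ant | grep -c ':5432'
```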

- Here are the most active IPs today:

```
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "20/Apr/2018" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
917 207.46.13.182
935 213.55.99.121
970 40.77.167.134
978 207.46.13.80
1422 66.249.64.155
1577 50.116.102.77
2456 95.108.181.88
3216 104.196.152.243
4325 70.32.83.92
10718 45.5.184.2
```

- It doesn't even seem like there is a lot of traffic compared to the previous days:

```
# zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "20/Apr/2018" | wc -l
74931
# zcat --force /var/log/nginx/*.log.1 /var/log/nginx/*.log.2.gz| grep -E "19/Apr/2018" | wc -l
91073
# zcat --force /var/log/nginx/*.log.2.gz /var/log/nginx/*.log.3.gz| grep -E "18/Apr/2018" | wc -l
93459
```

- I tried to restart Tomcat but `systemctl` hangs
- I tried to reboot the server from the command line but after a few minutes it didn't come back up
- Looking at the Linode console I see that it is stuck trying to shut down
- Even "Reboot" via Linode console doesn't work!
- After shutting it down a few times via the Linode console it finally rebooted
- Everything is back, but I have no idea what caused this; I suspect something with the hosting provider
- Also super weird, the last entry in the DSpace log file is from `2018-04-20 16:35:09`, and then the next one jumps to `2018-04-20 19:15:04` (three hours later!):

```
2018-04-20 16:35:09,144 ERROR org.dspace.app.util.AbstractDSpaceWebapp @ Failed to record shutdown in Webapp table.
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [localhost-startStop-2] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:18; idle
:0; lastwait:5000].
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:685)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:187)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:128)
at org.dspace.storage.rdbms.DatabaseManager.getConnection(DatabaseManager.java:632)
at org.dspace.core.Context.init(Context.java:121)
at org.dspace.core.Context.<init>(Context.java:95)
at org.dspace.app.util.AbstractDSpaceWebapp.deregister(AbstractDSpaceWebapp.java:97)
at org.dspace.app.util.DSpaceContextListener.contextDestroyed(DSpaceContextListener.java:146)
at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:5115)
at org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5779)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:224)
at org.apache.catalina.core.ContainerBase$StopChild.call(ContainerBase.java:1588)
at org.apache.catalina.core.ContainerBase$StopChild.call(ContainerBase.java:1577)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-04-20 19:15:04,006 INFO org.dspace.core.ConfigurationManager @ Loading from classloader: file:/home/cgspace.cgiar.org/config/dspace.cfg
```

- Very suspect!

## 2018-04-24

- I tested my Ansible playbooks against a clean, updated installation of Ubuntu 18.04 and fixed some issues that I hadn't run into a few weeks ago
- There seems to be a new issue with Java dependencies, though
- The `default-jre` package is going to be Java 10 on Ubuntu 18.04, but I want to use `openjdk-8-jre-headless` (well, the JDK actually, but it uses this JRE)
- Tomcat and Ant are fine with Java 8, but the `maven` package wants to pull in Java 10 for some reason
- Looking closer, I see that `maven` depends on `java7-runtime-headless`, which is indeed provided by `openjdk-8-jre-headless`
- So it must be one of Maven's dependencies...
- I will watch it for a few days because it could be an issue that will be resolved before Ubuntu 18.04's release
- Otherwise I will post a bug to the ubuntu-release mailing list
- Looks like the only way to fix this is to install `openjdk-8-jdk-headless` first (so it pulls in the JRE) in a separate transaction, or to manually install `openjdk-8-jre-headless` in the same apt transaction as `maven` (both are sketched below)
- Also, I started porting PostgreSQL 9.6 into the Ansible infrastructure scripts
- This should be a drop-in replacement, I believe, though I will definitely test it more locally as well as on DSpace Test once we move to DSpace 5.8 and Ubuntu 18.04 in the coming months
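- A sketch of those two apt approaches for the Maven/Java issue (package names as discussed above):

```
# apt install openjdk-8-jdk-headless
# apt install maven
```

- ... or pull the Java 8 JRE in explicitly in the same transaction:

```
# apt install openjdk-8-jre-headless maven
```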

## 2018-04-25

- Still testing the [Ansible infrastructure playbooks](https://github.com/ilri/rmg-ansible-public) for Ubuntu 18.04, Tomcat 8.5, and PostgreSQL 9.6
- One other new thing I notice is that PostgreSQL 9.6 no longer uses `createuser` and `nocreateuser`, as those have actually meant `superuser` and `nosuperuser` and have been deprecated for *ten years*
- So I need to amend my notes: when importing a CGSpace database dump I should give the user superuser permission, rather than createuser:

```
$ psql dspacetest -c 'alter user dspacetest superuser;'
$ pg_restore -O -U dspacetest -d dspacetest -W -h localhost /tmp/dspace_2018-04-18.backup
```
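
- Presumably I should also drop the privilege again once the restore finishes, something like:

```
$ psql dspacetest -c 'alter user dspacetest nosuperuser;'
```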

- There's another issue with Tomcat in Ubuntu 18.04:

```
25-Apr-2018 13:26:21.493 SEVERE [http-nio-127.0.0.1-8443-exec-1] org.apache.coyote.AbstractProtocol$ConnectionHandler.process Error reading request, ignored
java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
at org.apache.coyote.http11.Http11InputBuffer.init(Http11InputBuffer.java:688)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:672)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
```

- There's a [Debian bug about this from a few weeks ago](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=895866)
- Apparently Tomcat was compiled with Java 9, so it doesn't work with Java 8
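- If I want to verify that, checking the class file version of one of Tomcat's classes should give it away (a sketch, assuming the Ubuntu package layout; Java 8 classes are major version 52, Java 9 classes are 53):

```
$ javap -verbose -cp /usr/share/tomcat8/lib/catalina.jar org.apache.catalina.startup.Catalina | grep 'major version'
```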

## 2018-04-29

- DSpace Test crashed again, looks like memory issues again
- JVM heap size was last increased to 6144m but the system only has 8GB total so there's not much we can do here other than get a bigger Linode instance or remove the massive Solr Statistics data

## 2018-04-30

- DSpace Test crashed again
- I will email the CGSpace team to ask them whether or not we want to commit to having a public test server that accurately mirrors CGSpace (ie, to upgrade to the next largest Linode)