- I remember seeing that Munin shows that the average number of connections is 50 (which is probably mostly from the XMLUI) and we're currently allowing 40 connections per app, so maybe it would be good to bump that value up to 50 or 60, along with the system's PostgreSQL `max_connections` (the formula should be: webapps * 60 + 3, or 3 * 60 + 3 = 183 in our case)
- I updated both CGSpace and DSpace Test to use these new settings (60 connections per web app and 183 for the system PostgreSQL limit)
- I'm expecting to see 0 connection errors for the next few months
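- For reference, a quick sanity check of the formula and of what the PostgreSQL server is actually allowing (the psql invocation here is just an example; adjust the user and host for the real servers):

```
# 3 webapps * 60 connections each + 3 superuser connections
$ echo $((3 * 60 + 3))
183
$ psql -U postgres -c 'SHOW max_connections;'
```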
## 2017-09-11
- Lots of work testing the CGIAR Library migration
- Many technical notes and TODOs here: https://gist.github.com/alanorth/3579b74e116ab13418d187ed379abd9c
## 2017-09-12
- I was testing the [METS XSD caching during AIP ingest](https://wiki.duraspace.org/display/DSDOC5x/AIP+Backup+and+Restore#AIPBackupandRestore-AIPConfigurationsToImproveIngestionSpeedwhileValidating) but it doesn't actually seem to help
- The import process takes the same amount of time with and without the caching
- Also, I captured TCP packets destined for port 80, and during both imports the capture contained only ONE packet (an update check from some component in Java):
```
$ sudo tcpdump -i en0 -w without-cached-xsd.dump dst port 80 and 'tcp[32:4] = 0x47455420'
```
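- For reference, the `0x47455420` in that filter is just the ASCII bytes for `GET ` (easy to verify locally):

```
# print "GET " as plain hex: 47 45 54 20
$ printf 'GET ' | xxd -p
47455420
```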
- Great TCP dump guide here: https://danielmiessler.com/study/tcpdump
- The last part of that command filters for HTTP GET requests, of which there should have been many to fetch all the XSD files for validation
- I sent a message to the mailing list to see if anyone knows more about this
- Looking at the tcpdump results, I notice that there is an update check to the ehcache server on _every_ iteration of the ingest loop, for example:
```
09:39:36.008956 IP 192.168.8.124.50515 > 157.189.192.67.http: Flags [P.], seq 1736833672:1736834103, ack 147469926, win 4120, options [nop,nop,TS val 1175113331 ecr 550028064], length 431: HTTP: GET /kit/reflector?kitID=ehcache.default&pageID=update.properties&id=2130706433&os-name=Mac+OS+X&jvm-name=Java+HotSpot%28TM%29+64-Bit+Server+VM&jvm-version=1.8.0_144&platform=x86_64&tc-version=UNKNOWN&tc-product=Ehcache+Core+1.7.2&source=Ehcache+Core&uptime-secs=0&patch=UNKNOWN HTTP/1.1
```
- Turns out this is a known issue and Ehcache has refused to make it opt-in: https://jira.terracotta.org/jira/browse/EHC-461
- But we can disable it by adding an `updateCheck="false"` attribute to the main `<ehcache>` tag in `dspace-services/src/main/resources/caching/ehcache-config.xml`
- After re-compiling and re-deploying DSpace I no longer see those update checks during item submission
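- For the record, a quick way to check whether the attribute is present in a given source tree (and the same tcpdump filter from above confirms the requests are gone after deployment):

```
$ grep -n 'updateCheck' dspace-services/src/main/resources/caching/ehcache-config.xml
```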
- I had a Skype call with Bram Luyten from Atmire to discuss various issues related to ORCID in DSpace
- First, ORCID is deprecating their version 1 API (which DSpace uses), and in the version 2 API they have removed the ability to search for users by name
- The logic is that searching by name actually isn't very useful because ORCID is essentially a global phonebook and there are tons of legitimately duplicate and ambiguous names
- Atmire's proposed integration would work by having users look up and add authors to the authority core directly using their ORCID ID itself (this would happen during the item submission process, or perhaps as a standalone/batch process, for example to populate the authority core with a list of known ORCIDs)
- Once the association between name and ORCID is made in the authority, it can be autocompleted in the lookup field
- Ideally there could also be a user interface for cleanup and merging of authorities
- He will prepare a quote for us, keeping in mind that this could be useful to contribute back to the community for a 5.x release
- As far as exposing ORCIDs as flat metadata alongside all other metadata, he says this should be possible and will work on a quote for us