---
title: "August, 2018"
date: 2018-08-01T11:52:54+03:00
author: "Alan Orth"
tags: ["Notes"]
---
## 2018-08-01
- DSpace Test had crashed at some point yesterday morning and I see the following in `dmesg`:
```
[Tue Jul 31 00:00:41 2018] Out of memory: Kill process 1394 (java) score 668 or sacrifice child
[Tue Jul 31 00:00:41 2018] Killed process 1394 (java) total-vm:15601860kB, anon-rss:5355528kB, file-rss:0kB, shmem-rss:0kB
[Tue Jul 31 00:00:41 2018] oom_reaper: reaped process 1394 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
```
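- For what it's worth, one way to double-check which process the OOM killer reaped is to grep the kernel messages from around the time of the crash (a rough sketch, assuming a systemd host where `journalctl -k` exposes the kernel ring buffer):

```
$ dmesg -T | grep -E 'Out of memory|oom_reaper'
$ journalctl -k --since "2018-07-31" | grep -i 'killed process'
```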
- Judging from the time of the crash it was probably related to the Discovery indexing that starts at midnight
- From the DSpace log I see that eventually Solr stopped responding, so I guess the `java` process that was OOM killed above was Tomcat's
- I'm not sure why Tomcat didn't crash with an OutOfMemoryError...
- Anyways, perhaps I should increase the JVM heap from 5120m to 6144m like we did a few months ago when we tried to run the whole CGSpace Solr core (a sketch of the change is at the end of this list)
- The server only has 8GB of RAM so we'll eventually need to upgrade to a larger one because we'll start starving the OS, PostgreSQL, and command line batch processes
- I ran all system updates on DSpace Test and rebooted it
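- For reference, a minimal sketch of what the heap bump might look like, assuming the heap is set via `JAVA_OPTS` in Tomcat's `bin/setenv.sh` (the exact place it's set on this host may differ):

```
# hypothetical bin/setenv.sh: raise the heap from 5120m to 6144m
JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true -Xms6144m -Xmx6144m -Dfile.encoding=UTF-8"
```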
<!--more-->
- I started looking over the latest round of IITA batch records from Sisay on DSpace Test: [IITA July_30](https://dspacetest.cgiar.org/handle/10568/103250)
  - incorrect authorship types
  - dozens of inconsistencies, spelling mistakes, and white space in author affiliations
  - minor issues in countries (California is not a country)
  - minor issues in IITA subjects, ISBNs, languages, and AGROVOC subjects
## 2018-08-02
- DSpace Test crashed again, and the only error I see is this in `dmesg`:
```
[Thu Aug 2 00:00:12 2018] Out of memory: Kill process 1407 (java) score 787 or sacrifice child
[Thu Aug 2 00:00:12 2018] Killed process 1407 (java) total-vm:18876328kB, anon-rss:6323836kB, file-rss:0kB, shmem-rss:0kB
```
- I am still assuming that it is the Tomcat process that is dying, so maybe we actually need to reduce its memory instead of increasing it?
- The risk we run there is that we'll start getting OutOfMemory errors from Tomcat
- So basically we need a new test server with more RAM very soon...
- Abenet asked about the workflow statistics in the Atmire CUA module again
  - Last year Atmire told me that it's disabled by default, but you can enable it with `workflow.stats.enabled = true` in the CUA configuration file (see the snippet after this list)
  - There was a bug with adding users so they sent a patch, but I didn't merge it because it was [very dirty](https://github.com/ilri/DSpace/pull/319) and I wasn't sure it actually fixed the problem
  - I just tried to enable the stats again on DSpace Test now that we're on DSpace 5.8 with updated Atmire modules, but every user I search for shows "No data available"
  - As a test I submitted a new item and I was able to see it in the workflow statistics "data" tab, but not in the graph
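- For reference, the setting Atmire mentioned would look something like this in the CUA configuration (the exact config file name is a guess on my part, not something I checked):

```
# in the Atmire CUA configuration file (file name is a guess)
workflow.stats.enabled = true
```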
<!-- vim: set sw=2 ts=2: -->