diff --git a/content/post/2016-12.md b/content/post/2016-12.md
index cae6f1155..3c865e9d4 100644
--- a/content/post/2016-12.md
+++ b/content/post/2016-12.md
@@ -684,3 +684,16 @@ $ exit
```
$ [dspace]/bin/dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
```
+
+## 2016-12-28
+
+- We've been getting two alerts per day about CPU usage on the new server from Linode
+- These are caused by the batch jobs for Solr etc that run in the early morning hours
+- The Linode default is to alert at 90% CPU usage for two hours, but I see the old server's alert threshold was set to 150%, so maybe we just need to adjust it on the new server as well
+- Speaking of the old server (linode01), I think we can decommission it now
+- I checked the S3 logs on the new server (linode18) to make sure the backups have been running and everything looks good
+- In other news, I was looking at the Munin graphs for PostgreSQL on the new server and it looks slightly worrying:
+
+![munin postgres stats](2016/12/postgres_size_ALL-week.png)
+
+- I will have to check later why the size keeps increasing
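+- As a first check, something like this should show whether it's the whole cluster growing or just a few tables (a minimal sketch; the `cgspace` database name is an assumption):
+
+```
+# list databases by size, then the ten largest tables in the DSpace database
+# NB: "cgspace" is an assumed database name
+$ psql -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database ORDER BY pg_database_size(datname) DESC;"
+$ psql cgspace -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"
+```
+
+- Speaking of the S3 backups above, a quick listing of the newest objects is enough to confirm uploads are still happening (assuming s3cmd; the bucket name is hypothetical):
+
+```
+# with date-stamped filenames the last entries are the newest
+$ s3cmd ls s3://cgspace-backups/ | tail -n 5
+```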
diff --git a/public/2015-11/index.html b/public/2015-11/index.html
index dae7e9ed3..6b1567f66 100644
--- a/public/2015-11/index.html
+++ b/public/2015-11/index.html
@@ -90,7 +90,7 @@ $ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspac
-
+
@@ -119,6 +119,14 @@ $ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspac
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2015-12/index.html b/public/2015-12/index.html
index 5ca3d2c95..e6605b2bc 100644
--- a/public/2015-12/index.html
+++ b/public/2015-12/index.html
@@ -93,7 +93,7 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
-
+
@@ -122,6 +122,14 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-01/index.html b/public/2016-01/index.html
index 7f3b110cd..76b2f9be8 100644
--- a/public/2016-01/index.html
+++ b/public/2016-01/index.html
@@ -78,7 +78,7 @@ Update GitHub wiki for documentation of maintenance tasks.
-
+
@@ -107,6 +107,14 @@ Update GitHub wiki for documentation of maintenance tasks.
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-02/index.html b/public/2016-02/index.html
index c35e532a9..853740b81 100644
--- a/public/2016-02/index.html
+++ b/public/2016-02/index.html
@@ -99,7 +99,7 @@ Also, lots of things like “COTE D`LVOIRE” and “COTE D IVOIRE&r
-
+
@@ -128,6 +128,14 @@ Also, lots of things like “COTE D`LVOIRE” and “COTE D IVOIRE&r
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-03/index.html b/public/2016-03/index.html
index 1be8e6bad..fed90dc5c 100644
--- a/public/2016-03/index.html
+++ b/public/2016-03/index.html
@@ -78,7 +78,7 @@ Reinstall my local (Mac OS X) DSpace stack with Tomcat 7, PostgreSQL 9.3, and Ja
-
+
@@ -107,6 +107,14 @@ Reinstall my local (Mac OS X) DSpace stack with Tomcat 7, PostgreSQL 9.3, and Ja
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-04/index.html b/public/2016-04/index.html
index 84be8b89b..f4042f3b2 100644
--- a/public/2016-04/index.html
+++ b/public/2016-04/index.html
@@ -84,7 +84,7 @@ Also, I noticed the checker log has some errors we should pay attention to:
-
+
@@ -113,6 +113,14 @@ Also, I noticed the checker log has some errors we should pay attention to:
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-05/index.html b/public/2016-05/index.html
index 4761a9600..18e9a59ac 100644
--- a/public/2016-05/index.html
+++ b/public/2016-05/index.html
@@ -90,7 +90,7 @@ There are 3,000 IPs accessing the REST API in a 24-hour period!
-
+
@@ -119,6 +119,14 @@ There are 3,000 IPs accessing the REST API in a 24-hour period!
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-06/index.html b/public/2016-06/index.html
index 831ee9346..04cc06ccd 100644
--- a/public/2016-06/index.html
+++ b/public/2016-06/index.html
@@ -87,7 +87,7 @@ Working on second phase of metadata migration, looks like this will work for mov
-
+
@@ -116,6 +116,14 @@ Working on second phase of metadata migration, looks like this will work for mov
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-07/index.html b/public/2016-07/index.html
index 61079dd6a..d0786b6f3 100644
--- a/public/2016-07/index.html
+++ b/public/2016-07/index.html
@@ -111,7 +111,7 @@ In this case the select query was showing 95 results before the update
-
+
@@ -140,6 +140,14 @@ In this case the select query was showing 95 results before the update
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-08/index.html b/public/2016-08/index.html
index 90ec6e09d..7d6033d16 100644
--- a/public/2016-08/index.html
+++ b/public/2016-08/index.html
@@ -102,7 +102,7 @@ $ git rebase -i dspace-5.5
-
+
@@ -131,6 +131,14 @@ $ git rebase -i dspace-5.5
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-09/index.html b/public/2016-09/index.html
index da15ab312..4b155adf3 100644
--- a/public/2016-09/index.html
+++ b/public/2016-09/index.html
@@ -90,7 +90,7 @@ $ ldapsearch -x -H ldaps://svcgroot2.cgiarad.org:3269/ -b "dc=cgiarad,dc=or
-
+
@@ -119,6 +119,14 @@ $ ldapsearch -x -H ldaps://svcgroot2.cgiarad.org:3269/ -b "dc=cgiarad,dc=or
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-10/index.html b/public/2016-10/index.html
index 28c69bc86..2a6808452 100644
--- a/public/2016-10/index.html
+++ b/public/2016-10/index.html
@@ -54,7 +54,7 @@
-
+
@@ -83,6 +83,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-11/index.html b/public/2016-11/index.html
index 465e681ec..e7cfdacad 100644
--- a/public/2016-11/index.html
+++ b/public/2016-11/index.html
@@ -54,7 +54,7 @@
-
+
@@ -83,6 +83,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
diff --git a/public/2016-12/index.html b/public/2016-12/index.html
index 1c00ff489..3ed76b40b 100644
--- a/public/2016-12/index.html
+++ b/public/2016-12/index.html
@@ -30,7 +30,7 @@
-
+
@@ -54,7 +54,7 @@
-
+
@@ -83,6 +83,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
@@ -851,6 +859,23 @@ $ exit
$ [dspace]/bin/dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
+
+<h2 id="2016-12-28">2016-12-28</h2>
+
+<ul>
+<li>We’ve been getting two alerts per day about CPU usage on the new server from Linode</li>
+<li>These are caused by the batch jobs for Solr etc that run in the early morning hours</li>
+<li>The Linode default is to alert at 90% CPU usage for two hours, but I see the old server’s alert threshold was set to 150%, so maybe we just need to adjust it on the new server as well</li>
+<li>Speaking of the old server (linode01), I think we can decommission it now</li>
+<li>I checked the S3 logs on the new server (linode18) to make sure the backups have been running and everything looks good</li>
+<li>In other news, I was looking at the Munin graphs for PostgreSQL on the new server and it looks slightly worrying:</li>
+</ul>
+
+<p><img src="2016/12/postgres_size_ALL-week.png" alt="munin postgres stats" /></p>
+
+<ul>
+<li>I will have to check later why the size keeps increasing</li>
+</ul>
diff --git a/public/2016/12/postgres_size_ALL-week.png b/public/2016/12/postgres_size_ALL-week.png
new file mode 100644
index 000000000..e2a6dabec
Binary files /dev/null and b/public/2016/12/postgres_size_ALL-week.png differ
diff --git a/public/index.html b/public/index.html
index 0d70a08b8..b88549926 100644
--- a/public/index.html
+++ b/public/index.html
@@ -36,7 +36,7 @@
-
+
@@ -69,6 +69,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
@@ -344,7 +352,7 @@ dspacetest=# select text_value from metadatavalue where metadata_field_id=3 and
Previous page
- Next page
+ Next page
diff --git a/public/index.xml b/public/index.xml
index 94f7e800e..7e7d5b3c8 100644
--- a/public/index.xml
+++ b/public/index.xml
@@ -754,6 +754,23 @@ $ exit
<pre><code>$ [dspace]/bin/dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
</code></pre>
+
+<h2 id="2016-12-28">2016-12-28</h2>
+
+<ul>
+<li>We’ve been getting two alerts per day about CPU usage on the new server from Linode</li>
+<li>These are caused by the batch jobs for Solr etc that run in the early morning hours</li>
+<li>The Linode default is to alert at 90% CPU usage for two hours, but I see the old server’s alert threshold was set to 150%, so maybe we just need to adjust it on the new server as well</li>
+<li>Speaking of the old server (linode01), I think we can decommission it now</li>
+<li>I checked the S3 logs on the new server (linode18) to make sure the backups have been running and everything looks good</li>
+<li>In other news, I was looking at the Munin graphs for PostgreSQL on the new server and it looks slightly worrying:</li>
+</ul>
+
+<p><img src="2016/12/postgres_size_ALL-week.png" alt="munin postgres stats" /></p>
+
+<ul>
+<li>I will have to check later why the size keeps increasing</li>
+</ul>
diff --git a/public/page/2/index.html b/public/page/2/index.html
index 2ff7da9d6..830b3648f 100644
--- a/public/page/2/index.html
+++ b/public/page/2/index.html
@@ -36,7 +36,7 @@
-
+
@@ -69,6 +69,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
@@ -206,7 +214,7 @@
@@ -344,7 +352,7 @@ dspacetest=# select text_value from metadatavalue where metadata_field_id=3 and
Previous page
- Next page
+ Next page
diff --git a/public/post/index.xml b/public/post/index.xml
index 1c795edc5..78795852c 100644
--- a/public/post/index.xml
+++ b/public/post/index.xml
@@ -1,9 +1,9 @@
- Post-rsses on CGSpace Notes
+ Posts on CGSpace Notes
https://alanorth.github.io/cgspace-notes/post/index.xml
- Recent content in Post-rsses on CGSpace Notes
+ Recent content in Posts on CGSpace Notes
Hugo -- gohugo.io
en-us
Fri, 02 Dec 2016 10:43:00 +0300
@@ -754,6 +754,23 @@ $ exit
<pre><code>$ [dspace]/bin/dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
</code></pre>
+
+<h2 id="2016-12-28">2016-12-28</h2>
+
+<ul>
+<li>We’ve been getting two alerts per day about CPU usage on the new server from Linode</li>
+<li>These are caused by the batch jobs for Solr etc that run in the early morning hours</li>
+<li>The Linode default is to alert at 90% CPU usage for two hours, but I see the old server’s alert threshold was set to 150%, so maybe we just need to adjust it on the new server as well</li>
+<li>Speaking of the old server (linode01), I think we can decommission it now</li>
+<li>I checked the S3 logs on the new server (linode18) to make sure the backups have been running and everything looks good</li>
+<li>In other news, I was looking at the Munin graphs for PostgreSQL on the new server and it looks slightly worrying:</li>
+</ul>
+
+<p><img src="2016/12/postgres_size_ALL-week.png" alt="munin postgres stats" /></p>
+
+<ul>
+<li>I will have to check later why the size keeps increasing</li>
+</ul>
diff --git a/public/post/page/2/index.html b/public/post/page/2/index.html
index 37a1688dd..a1a139507 100644
--- a/public/post/page/2/index.html
+++ b/public/post/page/2/index.html
@@ -36,7 +36,7 @@
-
+
@@ -69,6 +69,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
@@ -206,7 +214,7 @@
@@ -344,7 +352,7 @@ dspacetest=# select text_value from metadatavalue where metadata_field_id=3 and
Previous page
- Next page
+ Next page
diff --git a/public/tags/notes/index.xml b/public/tags/notes/index.xml
index 677caca60..9cee53775 100644
--- a/public/tags/notes/index.xml
+++ b/public/tags/notes/index.xml
@@ -1,9 +1,9 @@
- CGSpace Notes
+ Notes on CGSpace Notes
https://alanorth.github.io/cgspace-notes/tags/notes/index.xml
- Recent content on CGSpace Notes
+ Recent content in Notes on CGSpace Notes
Hugo -- gohugo.io
en-us
@@ -753,6 +753,23 @@ $ exit
<pre><code>$ [dspace]/bin/dspace metadata-export -a -f /tmp/iita.csv -i 10568/68616
</code></pre>
+
+<h2 id="2016-12-28">2016-12-28</h2>
+
+<ul>
+<li>We’ve been getting two alerts per day about CPU usage on the new server from Linode</li>
+<li>These are caused by the batch jobs for Solr etc that run in the early morning hours</li>
+<li>The Linode default is to alert at 90% CPU usage for two hours, but I see the old server’s alert threshold was set to 150%, so maybe we just need to adjust it on the new server as well</li>
+<li>Speaking of the old server (linode01), I think we can decommission it now</li>
+<li>I checked the S3 logs on the new server (linode18) to make sure the backups have been running and everything looks good</li>
+<li>In other news, I was looking at the Munin graphs for PostgreSQL on the new server and it looks slightly worrying:</li>
+</ul>
+
+<p><img src="2016/12/postgres_size_ALL-week.png" alt="munin postgres stats" /></p>
+
+<ul>
+<li>I will have to check later why the size keeps increasing</li>
+</ul>
diff --git a/public/tags/notes/page/2/index.html b/public/tags/notes/page/2/index.html
index f9cc77c21..7e348ab53 100644
--- a/public/tags/notes/page/2/index.html
+++ b/public/tags/notes/page/2/index.html
@@ -36,7 +36,7 @@
-
+
@@ -69,6 +69,14 @@
Home
+ CGSpace Notes
+
+ Notes
+
+ Posts
+
+ Tags
+
@@ -206,7 +214,7 @@