diff --git a/content/posts/2019-04.md b/content/posts/2019-04.md
index 9af7ddd45..9b07cba3e 100644
--- a/content/posts/2019-04.md
+++ b/content/posts/2019-04.md
@@ -457,4 +457,18 @@ $ ./fix-metadata-values.py -i /tmp/2019-04-08-fix-60-affiliations-apostrophes.cs
 $ ./fix-metadata-values.py -i /tmp/2019-04-08-fix-20-subject-apostrophes.csv -db dspace -u dspace -p 'fuuu' -f dc.subject -m 57 -t correct -d
 ```
 
+- UptimeRobot said that CGSpace (linode18) went down tonight
+  - The load average is at `9.42, 8.87, 7.87`
+  - I looked at PostgreSQL and see shitloads of connections there:
+
+```
+$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
+      5 dspaceApi
+      7 dspaceCli
+    250 dspaceWeb
+```
+
+- But still I see 10 to 30% CPU steal so I think it's probably a big factor
+- Linode Support still didn't respond to my ticket from yesterday, so I attached a new output of `iostat 1 10` and asked them to move the VM to a less busy host
+
diff --git a/docs/2019-04/index.html b/docs/2019-04/index.html
index 4c02bd271..e4acbe5ae 100644
--- a/docs/2019-04/index.html
+++ b/docs/2019-04/index.html
@@ -81,9 +81,9 @@ $ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-1-region.csv -db dspace
   "@type": "BlogPosting",
   "headline": "April, 2019",
   "url": "https://alanorth.github.io/cgspace-notes/2019-04/",
-  "wordCount": "2631",
+  "wordCount": "2729",
   "datePublished": "2019-04-01T09:00:43+03:00",
-  "dateModified": "2019-04-08T11:26:20+03:00",
+  "dateModified": "2019-04-08T16:29:20+03:00",
   "author": {
     "@type": "Person",
     "name": "Alan Orth"