diff --git a/docs/2015-11/index.html b/docs/2015-11/index.html
index 869f9f11a..c96669c10 100644
--- a/docs/2015-11/index.html
+++ b/docs/2015-11/index.html
@@ -31,7 +31,7 @@ Last week I had increased the limit from 30 to 60, which seemed to help, but now
$ psql -c 'SELECT * from pg_stat_activity;' | grep idle | grep -c cgspace
78
"/>
-
+
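To see where those idle connections come from, a quick breakdown by database and state from pg_stat_activity helps (a minimal sketch, assuming psql is pointed at the local cluster as above):
$ psql -c 'SELECT datname, state, count(*) FROM pg_stat_activity GROUP BY datname, state ORDER BY count(*) DESC;'
A large count of idle rows for the cgspace database would be the connection pool holding connections open.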
diff --git a/docs/2015-12/index.html b/docs/2015-12/index.html
index 7c046aee8..e955ad710 100644
--- a/docs/2015-12/index.html
+++ b/docs/2015-12/index.html
@@ -33,7 +33,7 @@ Replace lzop with xz in log compression cron jobs on DSpace Test—it uses less
-rw-rw-r-- 1 tomcat7 tomcat7 387K Nov 18 23:59 dspace.log.2015-11-18.lzo
-rw-rw-r-- 1 tomcat7 tomcat7 169K Nov 18 23:59 dspace.log.2015-11-18.xz
"/>
-
+
diff --git a/docs/2016-01/index.html b/docs/2016-01/index.html
index f386d24a4..4db926b29 100644
--- a/docs/2016-01/index.html
+++ b/docs/2016-01/index.html
@@ -25,7 +25,7 @@ Move ILRI collection 10568/12503 from 10568/27869 to 10568/27629 using the move_
I realized it is only necessary to clear the Cocoon cache after moving collections—rather than reindexing—as no metadata has changed, and therefore no search or browse indexes need to be updated.
Update GitHub wiki for documentation of maintenance tasks.
"/>
-
+
@@ -132,7 +132,7 @@ Update GitHub wiki for documentation of maintenance tasks.
Tweak date-based facets to show more values in drill-down ranges (#162)
Need to remember to clear the Cocoon cache after deployment or else you don’t see the new ranges immediately
Set up recipe on IFTTT to tweet new items from the CGSpace Atom feed to my twitter account
-Altmetrics’ support for Handles is kinda weak, so they can’t associate our items with DOIs until they are tweeted or blogged, etc first.
+Altmetrics' support for Handles is kinda weak, so they can’t associate our items with DOIs until they are tweeted or blogged, etc first.
2016-01-21
diff --git a/docs/2016-02/index.html b/docs/2016-02/index.html
index 9636d750c..1153fc919 100644
--- a/docs/2016-02/index.html
+++ b/docs/2016-02/index.html
@@ -35,7 +35,7 @@ I noticed we have a very interesting list of countries on CGSpace:
Not only are there 49,000 countries, we have some blanks (25)…
Also, lots of things like “COTE D`LVOIRE” and “COTE D IVOIRE”
"/>
-
+
@@ -299,7 +299,7 @@ CIAT_COLOMBIA_000169_Técnicas_para_el_aislamiento_y_cultivo_de_protoplastos_de_
Then you create a facet for blank values on each column, show the rows that have values for one and not the other, then transform each independently to have the contents of the other, with “||” in between
Work on Python script for parsing and downloading PDF records from dc.identifier.url
To get filenames from dc.identifier.url, create a new column based on this transform: forEach(value.split('||'), v, v.split('/')[-1]).join('||')
-This also works for records that have multiple URLs (separated by “||”)
+This also works for records that have multiple URLs (separated by “||")
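Roughly the same filename extraction as the GREL transform above can be done in the shell; a sketch with a made-up pair of URLs:
$ echo 'https://example.org/bitstream/10568/1234/report.pdf||https://example.org/files/annex.pdf' | awk -F'\\|\\|' '{ for (i = 1; i &lt;= NF; i++) { n = split($i, p, "/"); printf "%s%s", p[n], (i &lt; NF ? "||" : "\n") } }'
report.pdf||annex.pdf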
2016-02-17
diff --git a/docs/2016-03/index.html b/docs/2016-03/index.html
index 9d32ee124..af3696d12 100644
--- a/docs/2016-03/index.html
+++ b/docs/2016-03/index.html
@@ -25,7 +25,7 @@ Looking at issues with author authorities on CGSpace
For some reason we still have the index-lucene-update cron job active on CGSpace, but I’m pretty sure we don’t need it as of the latest few versions of Atmire’s Listings and Reports module
Reinstall my local (Mac OS X) DSpace stack with Tomcat 7, PostgreSQL 9.3, and Java JDK 1.7 to match environment on CGSpace server
"/>
-
+
diff --git a/docs/2016-04/index.html b/docs/2016-04/index.html
index 31394adbc..5fbfa6e30 100644
--- a/docs/2016-04/index.html
+++ b/docs/2016-04/index.html
@@ -29,7 +29,7 @@ After running DSpace for over five years I’ve never needed to look in any
This will save us a few gigs of backup space we’re paying for on S3
Also, I noticed the checker log has some errors we should pay attention to:
"/>
-
+
@@ -147,7 +147,7 @@ java.io.FileNotFoundException: /home/cgspace.cgiar.org/assetstore/64/29/06/64290
******************************************************
So this would be the tomcat7 Unix user, who seems to have a default limit of 1024 files in its shell
-For what it’s worth, we have been setting the actual Tomcat 7 process’ limit to 16384 for a few years (in /etc/default/tomcat7)
+For what it’s worth, we have been setting the actual Tomcat 7 process' limit to 16384 for a few years (in /etc/default/tomcat7)
Looks like cron will read limits from /etc/security/limits.* so we can do something for the tomcat7 user there
Submit pull request for Tomcat 7 limits in Ansible dspace role (#30)
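A sketch of what the limits override for the tomcat7 user could look like (the file name under /etc/security/limits.d/ is arbitrary):
# cat > /etc/security/limits.d/tomcat7.conf <<EOF
tomcat7 soft nofile 16384
tomcat7 hard nofile 16384
EOF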
diff --git a/docs/2016-05/index.html b/docs/2016-05/index.html
index 5d8e3d26a..1425242ef 100644
--- a/docs/2016-05/index.html
+++ b/docs/2016-05/index.html
@@ -31,7 +31,7 @@ There are 3,000 IPs accessing the REST API in a 24-hour period!
# awk '{print $1}' /var/log/nginx/rest.log | uniq | wc -l
3168
"/>
-
+
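Note that uniq only collapses adjacent duplicates, so a truly unique count needs a sort first; this variant may give a lower (more accurate) number:
# awk '{print $1}' /var/log/nginx/rest.log | sort | uniq | wc -l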
diff --git a/docs/2016-06/index.html b/docs/2016-06/index.html
index 06e706e34..a727ceff5 100644
--- a/docs/2016-06/index.html
+++ b/docs/2016-06/index.html
@@ -31,7 +31,7 @@ This is their publications set: http://ebrary.ifpri.org/oai/oai.php?verb=ListRec
You can see the others by using the OAI ListSets verb: http://ebrary.ifpri.org/oai/oai.php?verb=ListSets
Working on second phase of metadata migration, looks like this will work for moving CPWF-specific data in dc.identifier.fund to cg.identifier.cpwfproject and then the rest to dc.description.sponsorship
"/>
-
+
diff --git a/docs/2016-07/index.html b/docs/2016-07/index.html
index 9de26e0fa..3d5760b57 100644
--- a/docs/2016-07/index.html
+++ b/docs/2016-07/index.html
@@ -41,7 +41,7 @@ dspacetest=# select text_value from metadatavalue where metadata_field_id=3 and
In this case the select query was showing 95 results before the update
"/>
-
+
diff --git a/docs/2016-08/index.html b/docs/2016-08/index.html
index 7d2185119..311270fb0 100644
--- a/docs/2016-08/index.html
+++ b/docs/2016-08/index.html
@@ -39,7 +39,7 @@ $ git checkout -b 55new 5_x-prod
$ git reset --hard ilri/5_x-prod
$ git rebase -i dspace-5.5
"/>
-
+
diff --git a/docs/2016-09/index.html b/docs/2016-09/index.html
index 04934cf53..135b10e13 100644
--- a/docs/2016-09/index.html
+++ b/docs/2016-09/index.html
@@ -31,7 +31,7 @@ It looks like we might be able to use OUs now, instead of DCs:
$ ldapsearch -x -H ldaps://svcgroot2.cgiarad.org:3269/ -b "dc=cgiarad,dc=org" -D "admigration1@cgiarad.org" -W "(sAMAccountName=admigration1)"
"/>
-
+
diff --git a/docs/2016-10/index.html b/docs/2016-10/index.html
index 03c04c044..b54ec8458 100644
--- a/docs/2016-10/index.html
+++ b/docs/2016-10/index.html
@@ -39,7 +39,7 @@ I exported a random item’s metadata as CSV, deleted all columns except id
0000-0002-6115-0956||0000-0002-3812-8793||0000-0001-7462-405X
"/>
-
+
diff --git a/docs/2016-11/index.html b/docs/2016-11/index.html
index 0142eabd6..682ca98ee 100644
--- a/docs/2016-11/index.html
+++ b/docs/2016-11/index.html
@@ -23,7 +23,7 @@ Add dc.type to the output options for Atmire’s Listings and Reports module
Add dc.type to the output options for Atmire’s Listings and Reports module (#286)
"/>
-
+
diff --git a/docs/2016-12/index.html b/docs/2016-12/index.html
index 12e60f42e..4ff1718bd 100644
--- a/docs/2016-12/index.html
+++ b/docs/2016-12/index.html
@@ -43,7 +43,7 @@ I see thousands of them in the logs for the last few months, so it’s not r
I’ve raised a ticket with Atmire to ask
Another worrying error from dspace.log is:
"/>
-
+
diff --git a/docs/2017-01/index.html b/docs/2017-01/index.html
index a42e75d99..ab22f0680 100644
--- a/docs/2017-01/index.html
+++ b/docs/2017-01/index.html
@@ -25,7 +25,7 @@ I checked to see if the Solr sharding task that is supposed to run on January 1s
I tested on DSpace Test as well and it doesn’t work there either
I asked on the dspace-tech mailing list because it seems to be broken, and actually now I’m not sure if we’ve ever had the sharding task run successfully over all these years
"/>
-
+
diff --git a/docs/2017-02/index.html b/docs/2017-02/index.html
index 53a568ffe..f9c60ecef 100644
--- a/docs/2017-02/index.html
+++ b/docs/2017-02/index.html
@@ -47,7 +47,7 @@ DELETE 1
Create issue on GitHub to track the addition of CCAFS Phase II project tags (#301)
Looks like we’ll be using cg.identifier.ccafsprojectpii as the field name
"/>
-
+
diff --git a/docs/2017-03/index.html b/docs/2017-03/index.html
index decbc8721..aa7f38e69 100644
--- a/docs/2017-03/index.html
+++ b/docs/2017-03/index.html
@@ -51,7 +51,7 @@ Interestingly, it seems DSpace 4.x’s thumbnails were sRGB, but forcing reg
$ identify ~/Desktop/alc_contrastes_desafios.jpg
/Users/aorth/Desktop/alc_contrastes_desafios.jpg JPEG 464x600 464x600+0+0 8-bit CMYK 168KB 0.000u 0:00.000
"/>
-
+
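For a quick manual test of the colorspace issue, ImageMagick can force sRGB on a single file (a sketch using the same test image; the output path is arbitrary):
$ convert ~/Desktop/alc_contrastes_desafios.jpg -colorspace sRGB /tmp/alc_contrastes_desafios-srgb.jpg
$ identify /tmp/alc_contrastes_desafios-srgb.jpg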
diff --git a/docs/2017-04/index.html b/docs/2017-04/index.html
index 6cf4153ca..96269a62a 100644
--- a/docs/2017-04/index.html
+++ b/docs/2017-04/index.html
@@ -37,7 +37,7 @@ Testing the CMYK patch on a collection with 650 items:
$ [dspace]/bin/dspace filter-media -f -i 10568/16498 -p "ImageMagick PDF Thumbnail" -v >& /tmp/filter-media-cmyk.txt
"/>
-
+
diff --git a/docs/2017-05/index.html b/docs/2017-05/index.html
index 74a7c7a36..2d8cab3d2 100644
--- a/docs/2017-05/index.html
+++ b/docs/2017-05/index.html
@@ -15,7 +15,7 @@
-
+
diff --git a/docs/2017-06/index.html b/docs/2017-06/index.html
index a89f94a6f..81f0d4a9d 100644
--- a/docs/2017-06/index.html
+++ b/docs/2017-06/index.html
@@ -15,7 +15,7 @@
-
+
diff --git a/docs/2017-07/index.html b/docs/2017-07/index.html
index 3d2328fed..c972abd36 100644
--- a/docs/2017-07/index.html
+++ b/docs/2017-07/index.html
@@ -33,7 +33,7 @@ Merge changes for WLE Phase II theme rename (#329)
Looking at extracting the metadata registries from ICARDA’s MEL DSpace database so we can compare fields with CGSpace
We can use PostgreSQL’s extended output format (-x) plus sed to format the output into quasi XML:
"/>
-
+
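A sketch of that idea, assuming a hypothetical mel database name and only the interesting columns of metadatafieldregistry:
$ psql -x -c 'SELECT element, qualifier, scope_note FROM metadatafieldregistry;' mel | sed -e 's/^-\[ RECORD \([0-9]*\) \]-*/&lt;record id="\1">/' -e 's/^\([a-z_]*\) *| \(.*\)/  &lt;\1>\2&lt;\/\1>/'
The -x output prints one "column | value" line per field, so the sed turns each record separator into an opening tag and each field into a quasi-XML element.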
diff --git a/docs/2017-08/index.html b/docs/2017-08/index.html
index 0c5613b2a..bd1d308b6 100644
--- a/docs/2017-08/index.html
+++ b/docs/2017-08/index.html
@@ -57,7 +57,7 @@ This was due to newline characters in the dc.description.abstract column, which
I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
"/>
-
+
diff --git a/docs/2017-09/index.html b/docs/2017-09/index.html
index d1f689400..36693adca 100644
--- a/docs/2017-09/index.html
+++ b/docs/2017-09/index.html
@@ -29,7 +29,7 @@ Linode sent an alert that CGSpace (linode18) was using 261% CPU for the past two
Ask Sisay to clean up the WLE approvers a bit, as Marianne’s user account is both in the approvers step as well as the group
"/>
-
+
diff --git a/docs/2017-10/index.html b/docs/2017-10/index.html
index 673953119..63575e276 100644
--- a/docs/2017-10/index.html
+++ b/docs/2017-10/index.html
@@ -31,7 +31,7 @@ http://hdl.handle.net/10568/78495||http://hdl.handle.net/10568/79336
There appears to be a pattern but I’ll have to look a bit closer and try to clean them up automatically, either in SQL or in OpenRefine
Add Katherine Lutz to the groups for content submission and edit steps of the CGIAR System collections
"/>
-
+
diff --git a/docs/2017-11/index.html b/docs/2017-11/index.html
index 671cc42bc..b7c6f7e0e 100644
--- a/docs/2017-11/index.html
+++ b/docs/2017-11/index.html
@@ -45,7 +45,7 @@ Generate list of authors on CGSpace for Peter to go through and correct:
dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors.csv with csv;
COPY 54701
"/>
-
+
diff --git a/docs/2017-12/index.html b/docs/2017-12/index.html
index 3348de63a..5360b5d96 100644
--- a/docs/2017-12/index.html
+++ b/docs/2017-12/index.html
@@ -27,7 +27,7 @@ The logs say “Timeout waiting for idle object”
PostgreSQL activity says there are 115 connections currently
The list of connections to XMLUI and REST API for today:
"/>
-
+
diff --git a/docs/2018-01/index.html b/docs/2018-01/index.html
index f9682954a..e3fd59630 100644
--- a/docs/2018-01/index.html
+++ b/docs/2018-01/index.html
@@ -147,7 +147,7 @@ dspace.log.2018-01-02:34
Danny wrote to ask for help renewing the wildcard ilri.org certificate and I advised that we should probably use Let’s Encrypt if it’s just a handful of domains
"/>
-
+
@@ -842,7 +842,7 @@ sys 0m2.210s
Discuss standardized names for CRPs and centers with ICARDA (don’t wait for CG Core)
Re-send DC rights implementation and forward to everyone so we can move forward with it (without the URI field for now)
Start looking at where I was with the AGROVOC API
-Have a controlled vocabulary for CGIAR authors’ names and ORCIDs? Perhaps values like: Orth, Alan S. (0000-0002-1735-7458)
+Have a controlled vocabulary for CGIAR authors' names and ORCIDs? Perhaps values like: Orth, Alan S. (0000-0002-1735-7458)
Need to find the metadata field name that ICARDA is using for their ORCIDs
Update text for DSpace version plan on wiki
Come up with an SLA, something like: In return for your contribution we will, to the best of our ability, ensure 99.5% (“two and a half nines”) uptime of CGSpace, ensure data is stored in open formats and safely backed up, follow CG Core metadata standards, …
diff --git a/docs/2018-02/index.html b/docs/2018-02/index.html
index d5a993335..8393de2db 100644
--- a/docs/2018-02/index.html
+++ b/docs/2018-02/index.html
@@ -27,7 +27,7 @@ We don’t need to distinguish between internal and external works, so that
Yesterday I figured out how to monitor DSpace sessions using JMX
I copied the logic in the jmx_tomcat_dbpools provided by Ubuntu’s munin-plugins-java package and used the stuff I discovered about JMX in 2018-01
"/>
-
+
@@ -538,7 +538,7 @@ $ grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' CGcenter_ORCID_ID_c
1204
Also, save that regex for the future because it will be very useful!
-CIAT sent a list of their authors’ ORCIDs and combined with ours there are now 1227:
+CIAT sent a list of their authors' ORCIDs and combined with ours there are now 1227:
Tom Desair from Atmire shared some extra JDBC pool parameters that might be useful on my thread on the dspace-tech mailing list:
abandonWhenPercentageFull: Only start cleaning up abandoned connections if the pool is used for more than X %.
-jdbcInterceptors='ResetAbandonedTimer’: Make sure the “abondoned” timer is reset every time there is activity on a connection
+jdbcInterceptors=‘ResetAbandonedTimer’: Make sure the “abondoned” timer is reset every time there is activity on a connection
I will try with abandonWhenPercentageFull='50'
diff --git a/docs/2018-03/index.html b/docs/2018-03/index.html
index 3a00467bb..20561edea 100644
--- a/docs/2018-03/index.html
+++ b/docs/2018-03/index.html
@@ -21,7 +21,7 @@ Export a CSV of the IITA community metadata for Martin Mueller
Export a CSV of the IITA community metadata for Martin Mueller
"/>
-
+
diff --git a/docs/2018-04/index.html b/docs/2018-04/index.html
index 7d79e8f77..bcbe0cdcf 100644
--- a/docs/2018-04/index.html
+++ b/docs/2018-04/index.html
@@ -23,7 +23,7 @@ Catalina logs at least show some memory errors yesterday:
I tried to test something on DSpace Test but noticed that it’s down since god knows when
Catalina logs at least show some memory errors yesterday:
"/>
-
+
diff --git a/docs/2018-05/index.html b/docs/2018-05/index.html
index 8a8b9fad1..a29a4ba19 100644
--- a/docs/2018-05/index.html
+++ b/docs/2018-05/index.html
@@ -35,7 +35,7 @@ http://localhost:3000/solr/statistics/update?stream.body=%3Ccommit/%3E
Then I reduced the JVM heap size from 6144 back to 5120m
Also, I switched it to use OpenJDK instead of Oracle Java, as well as re-worked the Ansible infrastructure scripts to support hosts choosing which distribution they want to use
"/>
-
+
@@ -367,7 +367,7 @@ $ ./bin/post -c countries ~/src/git/DSpace/2018-05-10-countries.csv
Discuss GDPR with James Stapleton
-As far as I see it, we are “Data Controllers” on CGSpace because we store peoples’ names, emails, and phone numbers if they register
+As far as I see it, we are “Data Controllers” on CGSpace because we store peoples' names, emails, and phone numbers if they register
We set cookies on the user’s computer, but these do not contain personally identifiable information (PII) and they are “session” cookies which are deleted when the user closes their browser
We use Google Analytics to track website usage, which makes Google the “Data Processor” and in this case we merely need to limit or obfuscate the information we send to them
As the only personally identifiable information we send is the user’s IP address, I think we only need to enable IP Address Anonymization in our analytics.js code snippets
diff --git a/docs/2018-06/index.html b/docs/2018-06/index.html
index b0f932e18..e6c93d30c 100644
--- a/docs/2018-06/index.html
+++ b/docs/2018-06/index.html
@@ -55,7 +55,7 @@ real 74m42.646s
user 8m5.056s
sys 2m7.289s
"/>
-
+
@@ -178,7 +178,7 @@ sys 2m7.289s
Institut National des Recherches Agricoles du B nin
Centre de Coop ration Internationale en Recherche Agronomique pour le D veloppement
Institut des Recherches Agricoles du B nin
-Institut des Savannes, C te d’ Ivoire
+Institut des Savannes, C te d' Ivoire
Institut f r Pflanzenpathologie und Pflanzenschutz der Universit t, Germany
Projet de Gestion des Ressources Naturelles, B nin
Universit t Hannover
@@ -424,7 +424,7 @@ delete from schema_version where version = '5.5.2015.12.03.3';
...
Done.
-Elizabeth from CIAT contacted me to ask if I could add ORCID identifiers to all of Andy Jarvis’ items on CGSpace
+Elizabeth from CIAT contacted me to ask if I could add ORCID identifiers to all of Andy Jarvis' items on CGSpace
$ ./add-orcid-identifiers-csv.py -i 2018-06-24-andy-jarvis-orcid.csv -db dspacetest -u dspacetest -p 'fuuu'
diff --git a/docs/2018-07/index.html b/docs/2018-07/index.html
index faac01115..99cbc83ae 100644
--- a/docs/2018-07/index.html
+++ b/docs/2018-07/index.html
@@ -33,7 +33,7 @@ During the mvn package stage on the 5.8 branch I kept getting issues with java r
There is insufficient memory for the Java Runtime Environment to continue.
"/>
-
+
diff --git a/docs/2018-08/index.html b/docs/2018-08/index.html
index 6874dcbb4..fc21a9b2e 100644
--- a/docs/2018-08/index.html
+++ b/docs/2018-08/index.html
@@ -43,7 +43,7 @@ Anyways, perhaps I should increase the JVM heap from 5120m to 6144m like we did
The server only has 8GB of RAM so we’ll eventually need to upgrade to a larger one because we’ll start starving the OS, PostgreSQL, and command line batch processes
I ran all system updates on DSpace Test and rebooted it
"/>
-
+
diff --git a/docs/2018-09/index.html b/docs/2018-09/index.html
index 37edc80c2..ea2d88cf1 100644
--- a/docs/2018-09/index.html
+++ b/docs/2018-09/index.html
@@ -27,7 +27,7 @@ I’ll update the DSpace role in our Ansible infrastructure playbooks and ru
Also, I’ll re-run the postgresql tasks because the custom PostgreSQL variables are dynamic according to the system’s RAM, and we never re-ran them after migrating to larger Linodes last month
I’m testing the new DSpace 5.8 branch in my Ubuntu 18.04 environment and I’m getting those autowire errors in Tomcat 8.5.30 again:
"/>
-
+
diff --git a/docs/2018-10/index.html b/docs/2018-10/index.html
index aa9e5a48a..cc4c685be 100644
--- a/docs/2018-10/index.html
+++ b/docs/2018-10/index.html
@@ -23,7 +23,7 @@ I created a GitHub issue to track this #389, because I’m super busy in Nai
Phil Thornton got an ORCID identifier so we need to add it to the list on CGSpace and tag his existing items
I created a GitHub issue to track this #389, because I’m super busy in Nairobi right now
"/>
-
+
diff --git a/docs/2018-11/index.html b/docs/2018-11/index.html
index 5c8ef97a4..cd984a2af 100644
--- a/docs/2018-11/index.html
+++ b/docs/2018-11/index.html
@@ -33,7 +33,7 @@ Send a note about my dspace-statistics-api to the dspace-tech mailing list
Linode has been sending mails a few times a day recently that CGSpace (linode18) has had high CPU usage
Today these are the top 10 IPs:
"/>
-
+
diff --git a/docs/2018-12/index.html b/docs/2018-12/index.html
index d20177ddb..844b97980 100644
--- a/docs/2018-12/index.html
+++ b/docs/2018-12/index.html
@@ -33,7 +33,7 @@ Then I ran all system updates and restarted the server
I noticed that there is another issue with PDF thumbnails on CGSpace, and I see there was another Ghostscript vulnerability last week
"/>
-
+
diff --git a/docs/2019-01/index.html b/docs/2019-01/index.html
index 51c8a5cfa..7508b4dbe 100644
--- a/docs/2019-01/index.html
+++ b/docs/2019-01/index.html
@@ -47,7 +47,7 @@ I don’t see anything interesting in the web server logs around that time t
357 207.46.13.1
903 54.70.40.11
"/>
-
+
diff --git a/docs/2019-02/index.html b/docs/2019-02/index.html
index 7fb2df775..2ae62027b 100644
--- a/docs/2019-02/index.html
+++ b/docs/2019-02/index.html
@@ -69,7 +69,7 @@ real 0m19.873s
user 0m22.203s
sys 0m1.979s
"/>
-
+
diff --git a/docs/2019-03/index.html b/docs/2019-03/index.html
index b1bba99b3..56bc6bc32 100644
--- a/docs/2019-03/index.html
+++ b/docs/2019-03/index.html
@@ -43,7 +43,7 @@ Most worryingly, there are encoding errors in the abstracts for eleven items, fo
I think I will need to ask Udana to re-copy and paste the abstracts with more care using Google Docs
"/>
-
+
@@ -847,7 +847,7 @@ org.postgresql.util.PSQLException: This statement has been closed.
Could be an error in the docs, as I see the Apache Commons DBCP has -1 as the default
Maybe I need to re-evaluate the “defauts” of Tomcat 7’s DBCP and set them explicitly in our config
-From Tomcat 8 they seem to default to Apache Commons’ DBCP 2.x
+From Tomcat 8 they seem to default to Apache Commons' DBCP 2.x
Also, CGSpace doesn’t have many Cocoon errors yet this morning:
diff --git a/docs/2019-04/index.html b/docs/2019-04/index.html
index 5c0e3a830..429ba8a02 100644
--- a/docs/2019-04/index.html
+++ b/docs/2019-04/index.html
@@ -61,7 +61,7 @@ $ ./fix-metadata-values.py -i /tmp/2019-02-21-fix-4-regions.csv -db dspace -u ds
$ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-2-countries.csv -db dspace -u dspace -p 'fuuu' -m 228 -f cg.coverage.country -d
$ ./delete-metadata-values.py -i /tmp/2019-02-21-delete-1-region.csv -db dspace -u dspace -p 'fuuu' -m 231 -f cg.coverage.region -d
"/>
-
+
diff --git a/docs/2019-05/index.html b/docs/2019-05/index.html
index 825015acb..7278fae3f 100644
--- a/docs/2019-05/index.html
+++ b/docs/2019-05/index.html
@@ -45,7 +45,7 @@ DELETE 1
But after this I tried to delete the item from the XMLUI and it is still present…
"/>
-
+
diff --git a/docs/2019-06/index.html b/docs/2019-06/index.html
index ddd96d506..c347d09ce 100644
--- a/docs/2019-06/index.html
+++ b/docs/2019-06/index.html
@@ -31,7 +31,7 @@ Run system updates on CGSpace (linode18) and reboot it
Skype with Marie-Angélique and Abenet about CG Core v2
"/>
-
+
diff --git a/docs/2019-07/index.html b/docs/2019-07/index.html
index 82247ddb4..262f61372 100644
--- a/docs/2019-07/index.html
+++ b/docs/2019-07/index.html
@@ -35,7 +35,7 @@ CGSpace
Abenet had another similar issue a few days ago when trying to find the stats for 2018 in the RTB community
"/>
-
+
@@ -516,7 +516,7 @@ issn.validate('1020-3362')
I turned the Pandas script into a proper Python package called csv-metadata-quality
It supports CSV and Excel files
-It fixes whitespace errors and erroneous multi-value separators (“|”) and validates ISSN, ISBNs, and dates
+It fixes whitespace errors and erroneous multi-value separators ("|") and validates ISSN, ISBNs, and dates
Also I added a bunch of other checks/fixes for unnecessary and “suspicious” Unicode characters
I added fixes to drop duplicate metadata values
And lastly, I added validation of ISO 639-2 and ISO 639-3 languages
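A rough invocation of the tool, assuming the -i/-o flags and installation from the ilri GitHub repository, with a hypothetical input file:
$ pip install git+https://github.com/ilri/csv-metadata-quality.git
$ csv-metadata-quality -i input.csv -o /tmp/input-cleaned.csv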
diff --git a/docs/2019-08/index.html b/docs/2019-08/index.html
index 963acfbec..269466689 100644
--- a/docs/2019-08/index.html
+++ b/docs/2019-08/index.html
@@ -43,7 +43,7 @@ After rebooting, all statistics cores were loaded… wow, that’s luck
Run system updates on DSpace Test (linode19) and reboot it
"/>
-
+
diff --git a/docs/2019-09/index.html b/docs/2019-09/index.html
index 016b59e51..aefbb675b 100644
--- a/docs/2019-09/index.html
+++ b/docs/2019-09/index.html
@@ -69,7 +69,7 @@ Here are the top ten IPs in the nginx XMLUI and REST/OAI logs this morning:
7249 2a01:7e00::f03c:91ff:fe18:7396
9124 45.5.186.2
"/>
-
+
diff --git a/docs/2019-10/index.html b/docs/2019-10/index.html
index c6b4d4ff7..b6a5e3980 100644
--- a/docs/2019-10/index.html
+++ b/docs/2019-10/index.html
@@ -15,7 +15,7 @@
-
+
diff --git a/docs/2019-11/index.html b/docs/2019-11/index.html
index 9e16bef93..12429bb5c 100644
--- a/docs/2019-11/index.html
+++ b/docs/2019-11/index.html
@@ -55,7 +55,7 @@ Let’s see how many of the REST API requests were for bitstreams (because t
# zcat --force /var/log/nginx/rest.log.*.gz | grep -E "[0-9]{1,2}/Oct/2019" | grep -c -E "/rest/bitstreams"
106781
"/>
-
+
diff --git a/docs/2019-12/index.html b/docs/2019-12/index.html
index 1b1813fb9..23bc0384b 100644
--- a/docs/2019-12/index.html
+++ b/docs/2019-12/index.html
@@ -43,7 +43,7 @@ Make sure all packages are up to date and the package manager is up to date, the
# dpkg -C
# reboot
"/>
-
+
@@ -150,7 +150,7 @@ Make sure all packages are up to date and the package manager is up to date, the
# tar czf 2019-12-01-linode18-etc.tar.gz /etc
Then check all third-party repositories in /etc/apt to see if everything using “xenial” has packages available for “bionic” and then update the sources:
-# sed -i ’s/xenial/bionic/’ /etc/apt/sources.list.d/*.list
+# sed -i ’s/xenial/bionic/' /etc/apt/sources.list.d/*.list
Pause the Uptime Robot monitoring for CGSpace
Make sure the update manager is installed and do the upgrade:
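A sketch of the usual Ubuntu release-upgrade flow for that step:
# apt install update-manager-core
# do-release-upgrade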
diff --git a/docs/2020-01/index.html b/docs/2020-01/index.html
index de5d8f2dc..53a646b70 100644
--- a/docs/2020-01/index.html
+++ b/docs/2020-01/index.html
@@ -53,7 +53,7 @@ I tweeted the CGSpace repository link
"/>
-
+
diff --git a/docs/2020-02/index.html b/docs/2020-02/index.html
index 7d3feff57..9bfd22668 100644
--- a/docs/2020-02/index.html
+++ b/docs/2020-02/index.html
@@ -35,7 +35,7 @@ The code finally builds and runs with a fresh install
"/>
-
+
diff --git a/docs/2020-03/index.html b/docs/2020-03/index.html
index c66bc7459..06165dc38 100644
--- a/docs/2020-03/index.html
+++ b/docs/2020-03/index.html
@@ -39,7 +39,7 @@ You need to download this into the DSpace 6.x source and compile it
"/>
-
+
@@ -444,7 +444,7 @@ $ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-i
2020-03-29
-Add two more Bioversity ORCID iDs to CGSpace and then tag ~70 of the authors’ existing publications in the database using this CSV with my add-orcid-identifiers-csv.py script:
+Add two more Bioversity ORCID iDs to CGSpace and then tag ~70 of the authors' existing publications in the database using this CSV with my add-orcid-identifiers-csv.py script:
dc.contributor.author,cg.creator.id
"Snook, L.K.","Laura Snook: 0000-0002-9168-1301"
diff --git a/docs/2020-04/index.html b/docs/2020-04/index.html
index 7f2b4b8a5..dd098026e 100644
--- a/docs/2020-04/index.html
+++ b/docs/2020-04/index.html
@@ -45,7 +45,7 @@ The third item now has a donut with score 1 since I tweeted it last week
On the same note, the one item Abenet pointed out last week now has a donut with score of 104 after I tweeted it last week
"/>
-
+
@@ -353,7 +353,7 @@ Total number of bot hits purged: 8909
"Waters-Bayer, Ann","Ann Waters-Bayer: 0000-0003-1887-7903"
"Klerkx, Laurens","Laurens Klerkx: 0000-0002-1664-886X"
-I confirmed some of the authors’ names from the report itself, then by looking at their profiles on ORCID.org
+I confirmed some of the authors' names from the report itself, then by looking at their profiles on ORCID.org
Add new ILRI subject “COVID19” to the 5_x-prod branch
Add new CCAFS Phase II project tags to the 5_x-prod branch
I will deploy these to CGSpace in the next few days
diff --git a/docs/2020-05/index.html b/docs/2020-05/index.html
index eec5ff946..fe2e25840 100644
--- a/docs/2020-05/index.html
+++ b/docs/2020-05/index.html
@@ -31,7 +31,7 @@ I see that CGSpace (linode18) is still using PostgreSQL JDBC driver version 42.2
"/>
-
+
diff --git a/docs/2020-06/index.html b/docs/2020-06/index.html
index f50b3399d..2ececfb71 100644
--- a/docs/2020-06/index.html
+++ b/docs/2020-06/index.html
@@ -33,7 +33,7 @@ I sent Atmire the dspace.log from today and told them to log into the server to
In other news, I checked the statistics API on DSpace 6 and it’s working
I tried to build the OAI registry on the freshly migrated DSpace 6 on DSpace Test and I get an error:
"/>
-
+
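The OAI registry rebuild is normally done with the oai launcher; a sketch, assuming the stock import command and its -c option to clear the index first:
$ [dspace]/bin/dspace oai import -c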
diff --git a/docs/2020-07/index.html b/docs/2020-07/index.html
index 9193db52c..f0e945b6f 100644
--- a/docs/2020-07/index.html
+++ b/docs/2020-07/index.html
@@ -35,7 +35,7 @@ I restarted Tomcat and PostgreSQL and the issue was gone
Since I was restarting Tomcat anyways I decided to redeploy the latest changes from the 5_x-prod branch and I added a note about COVID-19 items to the CGSpace frontpage at Peter’s request
"/>
-
+
@@ -431,7 +431,7 @@ $ ./fix-metadata-values.py -i 2020-07-07-fix-sponsors.csv -db dspace -u dspace -
-Yesterday Gabriela from CIP emailed to say that she was removing the accents from her authors’ names because of “funny character” issues with reports generated from CGSpace
+Yesterday Gabriela from CIP emailed to say that she was removing the accents from her authors' names because of “funny character” issues with reports generated from CGSpace
I told her that it’s probably her Windows / Excel that is messing up the data, and she figured out how to open them correctly!
Now she says she doesn’t want to remove the accents after all and she sent me a new list of corrections
diff --git a/docs/2020-08/index.html b/docs/2020-08/index.html
index f74ebcaf3..0c27c8a6c 100644
--- a/docs/2020-08/index.html
+++ b/docs/2020-08/index.html
@@ -33,7 +33,7 @@ It is class based so I can easily add support for other vocabularies, and the te
"/>
-
+
diff --git a/docs/2020-09/index.html b/docs/2020-09/index.html
index 034ac2864..56d78ea6f 100644
--- a/docs/2020-09/index.html
+++ b/docs/2020-09/index.html
@@ -25,7 +25,7 @@ I filed an issue on OpenRXV to make some minor edits to the admin UI: https://gi
-
+
@@ -45,7 +45,7 @@ I filed a bug on OpenRXV: https://github.com/ilri/OpenRXV/issues/39
I filed an issue on OpenRXV to make some minor edits to the admin UI: https://github.com/ilri/OpenRXV/issues/40
"/>
-
+
@@ -55,9 +55,9 @@ I filed an issue on OpenRXV to make some minor edits to the admin UI: https://gi
"@type": "BlogPosting",
"headline": "September, 2020",
"url": "https://alanorth.github.io/cgspace-notes/2020-09/",
- "wordCount": "1757",
+ "wordCount": "1911",
"datePublished": "2020-09-02T15:35:54+03:00",
- "dateModified": "2020-09-12T19:53:57+03:00",
+ "dateModified": "2020-09-15T17:32:29+03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
@@ -426,6 +426,44 @@ Would fix 3 occurences of: SOUTHWEST ASIA
Then I uploaded them to CGSpace
+2020-09-16
+
+Looking further into Carlos Tejos’s question about integrating LandVoc (the AGROVOC subset) into DSpace
+
+I see that you can actually get LandVoc concepts directly from AGROVOC’s SPARQL, for example with this query
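A rough shape of such a query from the shell (the SPARQL endpoint URL and the LandVoc concept scheme URI here are placeholders/assumptions):
$ curl -s -G 'https://agrovoc.fao.org/sparql' -H 'Accept: application/sparql-results+json' --data-urlencode 'query=PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#> SELECT ?concept ?label WHERE { ?concept skos:inScheme &lt;http://aims.fao.org/aos/agrovoc/landvocScheme> ; skos:prefLabel ?label . FILTER(lang(?label) = "en") } LIMIT 10'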