diff --git a/content/posts/2019-08.md b/content/posts/2019-08.md
index 2457657ff..652e487b4 100644
--- a/content/posts/2019-08.md
+++ b/content/posts/2019-08.md
@@ -71,4 +71,42 @@ or(
 - After removing the two duplicates there are now 1427 records
 - Fix one invalid ISSN: 1020-2002→1020-3362
 
+## 2019-08-07
+
+- Daniel Haile-Michael asked about using a logical OR with the DSpace OpenSearch, but I checked the DSpace manual and it does not seem to be possible
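+- For reference, the manual only documents a plain search string for the OpenSearch `query` parameter — something like this (the search term is just illustrative, and the path assumes the default `/open-search/discover` endpoint):
+
+```
+$ curl -s 'https://dspacetest.cgiar.org/open-search/discover?query=fisheries&format=atom'
+```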
+
+## 2019-08-08
+
+- Moayad noticed that the HTTPS certificate expired on the AReS dev server (linode20)
+  - The first problem was that there is a Docker container listening on port 80, so it conflicts with the ACME http-01 validation
+  - The second problem was that we only allow access to port 80 from localhost
+  - I adjusted the `renew-letsencrypt` systemd service so it stops/starts the Docker container and firewall:
+
+```
+# /opt/certbot-auto renew --standalone --pre-hook "/usr/bin/docker stop angular_nginx; /bin/systemctl stop firewalld" --post-hook "/bin/systemctl start firewalld; /usr/bin/docker start angular_nginx"
+```
+
+- It is important that the firewall starts back up before the Docker container or else Docker will complain about missing iptables chains
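+- To rehearse the hook ordering without burning Let's Encrypt rate limits, the same renewal can be tested against the staging endpoint — certbot's `--dry-run` should still run the pre/post hooks:
+
+```
+# /opt/certbot-auto renew --dry-run --standalone --pre-hook "/usr/bin/docker stop angular_nginx; /bin/systemctl stop firewalld" --post-hook "/bin/systemctl start firewalld; /usr/bin/docker start angular_nginx"
+```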
+- Also, I updated to the latest TLS Intermediate settings as appropriate for Ubuntu 18.04's [OpenSSL 1.1.0g with nginx 1.16.0](https://ssl-config.mozilla.org/#server=nginx&server-version=1.16.0&config=intermediate&openssl-version=1.1.0g&hsts=false&ocsp=false)
+- Run all system updates on AReS dev server (linode20) and reboot it
+- Get a list of all PDFs from the Bioversity migration that fail to download and save them so I can try again with a different path in the URL:
+
+```
+$ ./generate-thumbnails.py -i /tmp/2019-08-05-Bioversity-Migration.csv -w --url-field-name url -d | tee /tmp/2019-08-08-download-pdfs.txt
+$ grep -B1 "Download failed" /tmp/2019-08-08-download-pdfs.txt | grep "Downloading" | sed -e 's/> Downloading //' -e 's/\.\.\.//' | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g' | csvcut -H -c 1,1 > /tmp/user-upload.csv
+$ ./generate-thumbnails.py -i /tmp/user-upload.csv -w --url-field-name url -d | tee /tmp/2019-08-08-download-pdfs2.txt
+$ grep -B1 "Download failed" /tmp/2019-08-08-download-pdfs2.txt | grep "Downloading" | sed -e 's/> Downloading //' -e 's/\.\.\.//' | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g' | csvcut -H -c 1,1 > /tmp/user-upload2.csv
+$ ./generate-thumbnails.py -i /tmp/user-upload2.csv -w --url-field-name url -d | tee /tmp/2019-08-08-download-pdfs3.txt
+```
+
+- (The weird sed regex removes ANSI color codes, because my generate-thumbnails script prints pretty colors)
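+- Since each pass just feeds the previous pass's failures back in, the three rounds could be wrapped in a loop — a rough sketch (the round-numbered file names are made up, and in practice I still have to rewrite the path prefix in the failure list between rounds):
+
+```
+$ input=/tmp/2019-08-05-Bioversity-Migration.csv
+$ for round in 1 2 3; do
+    log="/tmp/download-pdfs-round${round}.txt"
+    next="/tmp/user-upload-round${round}.csv"
+    ./generate-thumbnails.py -i "$input" -w --url-field-name url -d | tee "$log"
+    # extract the URLs that failed, stripping the script's colored output
+    grep -B1 "Download failed" "$log" | grep "Downloading" \
+      | sed -e 's/> Downloading //' -e 's/\.\.\.//' \
+      | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g' \
+      | csvcut -H -c 1,1 > "$next"
+    # (edit the path prefix in "$next" here before the next round)
+    input="$next"
+  done
+```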
+- Some PDFs are uploaded under different paths, so I have to try a few times to get them all:
+  - `/fileadmin/_migrated/uploads/tx_news/`
+  - `/fileadmin/user_upload/online_library/publications/pdfs/`
+  - `/fileadmin/user_upload/`
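+- To find which prefix actually hosts a given file, one could probe each candidate and keep the first one that returns HTTP 200 — a sketch (the `$filename` value is hypothetical):
+
+```
+$ filename=some-document.pdf
+$ for prefix in /fileadmin/_migrated/uploads/tx_news/ /fileadmin/user_upload/online_library/publications/pdfs/ /fileadmin/user_upload/; do
+    curl -s -o /dev/null -w "%{http_code} ${prefix}\n" "https://www.bioversityinternational.org${prefix}${filename}"
+  done
+```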
+- Even so, there are still 52 items with incorrect filenames, so I can't derive their PDF URLs...
+  - For example, `Wild_cherry_Prunus_avium_859.pdf` is actually here (with double underscores): https://www.bioversityinternational.org/fileadmin/_migrated/uploads/tx_news/Wild_cherry__Prunus_avium__859.pdf
+- I will proceed with a metadata-only upload first and then let them know about the missing PDFs
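+- For what it's worth, the double-underscore variant around the Latin name in that example could be guessed with a regex — though a pattern like this is only a sketch and won't cover all 52 broken filenames:
+
+```
+$ echo 'Wild_cherry_Prunus_avium_859.pdf' | sed -E 's/_([A-Z][a-z]+)_([a-z]+)_/__\1_\2__/'
+Wild_cherry__Prunus_avium__859.pdf
+```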