Mirror of https://github.com/alanorth/cgspace-notes.git (synced 2024-11-22 14:45:03 +01:00)
Commit 6558e9d932 (parent 60cbb34a6a): Add notes
content/posts/2021-07.md (new file, 175 lines)
---
title: "July, 2021"
date: 2021-07-01T08:53:07+03:00
author: "Alan Orth"
categories: ["Notes"]
---

## 2021-07-01

- Export another list of ALL subjects on CGSpace, including AGROVOC and non-AGROVOC, for Enrico:

```console
localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-01-all-subjects.csv WITH CSV HEADER;
COPY 20994
```

<!--more-->

## 2021-07-04

- Update all Docker containers on the AReS server (linode20) and rebuild OpenRXV:

```console
$ cd OpenRXV
$ docker-compose -f docker/docker-compose.yml down
$ docker images | grep -v ^REPO | sed 's/ \+/:/g' | cut -d: -f1,2 | xargs -L1 docker pull
$ docker-compose -f docker/docker-compose.yml build
```
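
- For completeness, the matching command to bring the stack back up afterwards (not captured in the transcript above) would be something like:

```console
$ docker-compose -f docker/docker-compose.yml up -d # assumed step, not in the original transcript
```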

- Then run all system updates and reboot the server
- After the server came back up I cloned the `openrxv-items-final` index to `openrxv-items-temp` and started the plugins (a sketch of the clone is below, after this list)
  - This will hopefully be faster than a full re-harvest...
- I opened a few GitHub issues for OpenRXV bugs:
  - [Hide "metadata structure" section in repository setup](https://github.com/ilri/OpenRXV/issues/103)
  - [Improve "start plugins" and "commit indexing" buttons](https://github.com/ilri/OpenRXV/issues/104)
  - [Allow running plugins individually](https://github.com/ilri/OpenRXV/issues/105)
  - [Hide the "DSpace add missing items"](https://github.com/ilri/OpenRXV/issues/106)
- Rebuild DSpace Test (linode26) from a fresh Ubuntu 20.04 image on Linode
- The "start plugins" step on AReS had seventy-five errors from the `dspace_add_missing_items` plugin for some reason, so I had to start a fresh indexing
- I noticed that the WorldFish data has dozens of incorrect countries, so I should talk to Salem about that because they manage it
  - Also, I noticed that we weren't using the Country formatter in OpenRXV for the WorldFish country field, so some values don't get mapped properly
  - I added some value mappings to fix some issues with the WorldFish data, added a few more fields to the repository harvesting config, and started a fresh re-indexing
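
For reference, cloning an index like that can be done with Elasticsearch's `_clone` API, roughly as follows (a sketch rather than the exact commands from my history; it assumes Elasticsearch is on localhost:9200, and the clone API requires a temporary write block on the source index):

```console
$ curl -X PUT 'http://localhost:9200/openrxv-items-final/_settings' -H 'Content-Type: application/json' -d '{"settings": {"index.blocks.write": true}}'
$ curl -s -X POST 'http://localhost:9200/openrxv-items-final/_clone/openrxv-items-temp' # sketch; host and port are assumptions
$ curl -X PUT 'http://localhost:9200/openrxv-items-final/_settings' -H 'Content-Type: application/json' -d '{"settings": {"index.blocks.write": false}}'
```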

## 2021-07-05

- The AReS harvesting last night succeeded and I started the plugins
- Margarita from CCAFS asked if we can create a new field for AICCRA publications
  - I asked her to clarify what they want
  - AICCRA is an initiative, so it might be better to create a new field for that, for example `cg.contributor.initiative`

## 2021-07-06

- Atmire merged my spider user agent changes from last month so I will update the `example` list we use in DSpace and remove the new ones from my `ilri` override file
  - Also, I concatenated all our user agents into one file and purged all hits:

```console
$ ./ilri/check-spider-hits.sh -f /tmp/spiders -p
Purging 95 hits from Drupal in statistics
Purging 38 hits from DTS Agent in statistics
Purging 601 hits from Microsoft Office Existence Discovery in statistics
Purging 51 hits from Site24x7 in statistics
Purging 62 hits from Trello in statistics
Purging 13574 hits from WhatsApp in statistics
Purging 144 hits from FlipboardProxy in statistics
Purging 37 hits from LinkWalker in statistics
Purging 1 hits from [Ll]ink.?[Cc]heck.? in statistics
Purging 427 hits from WordPress in statistics

Total number of bot hits purged: 15030
```
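
For reference, the `/tmp/spiders` file above was just a concatenation of our user agent lists, roughly something like this (a sketch; the source paths are assumptions based on DSpace's `dspace/config/spiders/agents/` directory and my local `ilri` override):

```console
$ cat dspace/config/spiders/agents/example dspace/config/spiders/agents/ilri > /tmp/spiders # paths assumed
```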

- Meet with the CGIAR–AGROVOC task group to discuss how we want to do the workflow for submitting new terms to AGROVOC
- I extracted another list of all subjects to check against AGROVOC:

```console
\COPY (SELECT DISTINCT(LOWER(text_value)) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-06-all-subjects.csv WITH CSV HEADER;
$ csvcut -c 1 /tmp/2021-07-06-all-subjects.csv | sed 1d > /tmp/2021-07-06-all-subjects.txt
$ ./ilri/agrovoc-lookup.py -i /tmp/2021-07-06-all-subjects.txt -o /tmp/2021-07-06-agrovoc-results-all-subjects.csv -d
```

- Test [Hrafn Malmquist's proposed DBCP2 changes](https://github.com/DSpace/DSpace/pull/3162) for DSpace 6.4 (DS-4574)
  - His changes reminded me that we can perhaps switch back to using this pooling instead of Tomcat 7's JDBC pooling via JNDI
  - Tomcat 8 has DBCP2 built in, but we are stuck on Tomcat 7 for now
- Looking into the database issues we had last month, on 2021-06-23
  - I think it might have been some kind of attack, because the number of XMLUI sessions was through the roof at one point (10,000!) and the number of unique IPs accessing the server that day was much higher than on any other day:

```console
# for num in {10..26}; do echo "2021-06-$num"; zcat /var/log/nginx/access.log.*.gz /var/log/nginx/library-access.log.*.gz | grep "$num/Jun/2021" | awk '{print $1}' | sort | uniq | wc -l; done
2021-06-10
10693
2021-06-11
10587
2021-06-12
7958
2021-06-13
7681
2021-06-14
12639
2021-06-15
15388
2021-06-16
12245
2021-06-17
11187
2021-06-18
9684
2021-06-19
7835
2021-06-20
7198
2021-06-21
10380
2021-06-22
10255
2021-06-23
15878
2021-06-24
9963
2021-06-25
9439
2021-06-26
7930
```
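
One way to get the XMLUI session count for that day is to count unique session IDs in dspace.log, roughly like this (a sketch; the log file name is an assumption):

```console
$ grep -oE 'session_id=[A-Z0-9]{32}' dspace.log.2021-06-23 | sort -u | wc -l # log file name assumed
```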

- The number of unique IPs hitting the REST API, on the other hand, was around the average for the recent weeks before:

```console
# for num in {10..26}; do echo "2021-06-$num"; zcat /var/log/nginx/rest.*.gz | grep "$num/Jun/2021" | awk '{print $1}' | sort | uniq | wc -l; done
2021-06-10
1183
2021-06-11
1074
2021-06-12
911
2021-06-13
892
2021-06-14
1320
2021-06-15
1257
2021-06-16
1208
2021-06-17
1119
2021-06-18
965
2021-06-19
985
2021-06-20
854
2021-06-21
1098
2021-06-22
1028
2021-06-23
1375
2021-06-24
1135
2021-06-25
969
2021-06-26
904
```

- According to goaccess, the traffic spike started at 2AM (remember that the first "Pool empty" error in dspace.log was at 4:01AM):

```console
# zcat /var/log/nginx/access.log.1[45].gz /var/log/nginx/library-access.log.1[45].gz | grep -E '23/Jun/2021' | goaccess --log-format=COMBINED -
```

- Moayad sent a fix for the "add missing items" plugin issue ([#107](https://github.com/ilri/OpenRXV/pull/107))
  - It works MUCH faster because it correctly identifies the missing handles in each repository
  - Also, it adds better debug messages to the API logs

<!-- vim: set sw=2 ts=2: -->

The remaining diff hunks in this commit are regenerated docs/ HTML output: the Hugo generator meta tag bumped from 0.84.4 to 0.85.0 across existing pages, plus the re-rendered post page.
@ -6,37 +6,31 @@
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
|
||||
|
||||
|
||||
<meta property="og:title" content="June, 2021" />
|
||||
<meta property="og:description" content="2021-06-01
|
||||
|
||||
IWMI notified me that AReS was down with an HTTP 502 error
|
||||
|
||||
Looking at UptimeRobot I see it has been down for 33 hours, but I never got a notification
|
||||
I don’t see anything in the Elasticsearch container logs, or the systemd journal on the host, but I notice that the angular_nginx container isn’t running
|
||||
I simply started it and AReS was running again:
|
||||
<meta property="og:title" content="July, 2021" />
|
||||
<meta property="og:description" content="2021-07-01
|
||||
|
||||
Export another list of ALL subjects on CGSpace, including AGROVOC and non-AGROVOC for Enrico:
|
||||
|
||||
localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-01-all-subjects.csv WITH CSV HEADER;
|
||||
COPY 20994
|
||||
" />
|
||||
<meta property="og:type" content="article" />
|
||||
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2021-06/" />
|
||||
<meta property="article:published_time" content="2021-06-01T10:51:07+03:00" />
|
||||
<meta property="article:modified_time" content="2021-07-01T08:53:21+03:00" />
|
||||
<meta property="article:published_time" content="2021-06-01T08:53:07+03:00" />
|
||||
<meta property="article:modified_time" content="2021-06-01T08:53:07+03:00" />
|
||||
|
||||
|
||||
|
||||
<meta name="twitter:card" content="summary"/>
|
||||
<meta name="twitter:title" content="June, 2021"/>
|
||||
<meta name="twitter:description" content="2021-06-01
|
||||
|
||||
IWMI notified me that AReS was down with an HTTP 502 error
|
||||
|
||||
Looking at UptimeRobot I see it has been down for 33 hours, but I never got a notification
|
||||
I don’t see anything in the Elasticsearch container logs, or the systemd journal on the host, but I notice that the angular_nginx container isn’t running
|
||||
I simply started it and AReS was running again:
|
||||
<meta name="twitter:title" content="July, 2021"/>
|
||||
<meta name="twitter:description" content="2021-07-01
|
||||
|
||||
Export another list of ALL subjects on CGSpace, including AGROVOC and non-AGROVOC for Enrico:
|
||||
|
||||
localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-01-all-subjects.csv WITH CSV HEADER;
|
||||
COPY 20994
|
||||
"/>
|
||||
<meta name="generator" content="Hugo 0.84.4" />
|
||||
<meta name="generator" content="Hugo 0.85.0" />
|
||||
|
||||
|
||||
|
||||
@ -44,11 +38,11 @@ I simply started it and AReS was running again:
|
||||
{
|
||||
"@context": "http://schema.org",
|
||||
"@type": "BlogPosting",
|
||||
"headline": "June, 2021",
|
||||
"headline": "July, 2021",
|
||||
"url": "https://alanorth.github.io/cgspace-notes/2021-06/",
|
||||
"wordCount": "3505",
|
||||
"datePublished": "2021-06-01T10:51:07+03:00",
|
||||
"dateModified": "2021-07-01T08:53:21+03:00",
|
||||
"wordCount": "873",
|
||||
"datePublished": "2021-06-01T08:53:07+03:00",
|
||||
"dateModified": "2021-06-01T08:53:07+03:00",
|
||||
"author": {
|
||||
"@type": "Person",
|
||||
"name": "Alan Orth"
|
||||
@ -61,7 +55,7 @@ I simply started it and AReS was running again:
|
||||
|
||||
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2021-06/">
|
||||
|
||||
<title>June, 2021 | CGSpace Notes</title>
|
||||
<title>July, 2021 | CGSpace Notes</title>
|
||||
|
||||
|
||||
<!-- combined, minified CSS -->
|
||||
@ -113,564 +107,188 @@ I simply started it and AReS was running again:
|
||||
|
||||
<article class="blog-post">
|
||||
<header>
|
||||
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2021-06/">June, 2021</a></h2>
|
||||
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2021-06/">July, 2021</a></h2>
|
||||
<p class="blog-post-meta">
|
||||
<time datetime="2021-06-01T10:51:07+03:00">Tue Jun 01, 2021</time>
|
||||
<time datetime="2021-06-01T08:53:07+03:00">Tue Jun 01, 2021</time>
|
||||
in
|
||||
<span class="fas fa-folder" aria-hidden="true"></span> <a href="/cgspace-notes/categories/notes/" rel="category tag">Notes</a>
|
||||
|
||||
|
||||
</p>
|
||||
</header>
|
||||
<h2 id="2021-06-01">2021-06-01</h2>
|
||||
<h2 id="2021-07-01">2021-07-01</h2>
|
||||
<ul>
|
||||
<li>IWMI notified me that AReS was down with an HTTP 502 error
|
||||
<li>Export another list of ALL subjects on CGSpace, including AGROVOC and non-AGROVOC for Enrico:</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-01-all-subjects.csv WITH CSV HEADER;
|
||||
COPY 20994
|
||||
</code></pre><h2 id="2021-07-04">2021-07-04</h2>
|
||||
<ul>
|
||||
<li>Looking at UptimeRobot I see it has been down for 33 hours, but I never got a notification</li>
|
||||
<li>I don’t see anything in the Elasticsearch container logs, or the systemd journal on the host, but I notice that the <code>angular_nginx</code> container isn’t running</li>
|
||||
<li>I simply started it and AReS was running again:</li>
|
||||
<li>Update all Docker containers on the AReS server (linode20) and rebuild OpenRXV:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ docker-compose -f docker/docker-compose.yml start angular_nginx
|
||||
<pre><code class="language-console" data-lang="console">$ cd OpenRXV
|
||||
$ docker-compose -f docker/docker-compose.yml down
|
||||
$ docker images | grep -v ^REPO | sed 's/ \+/:/g' | cut -d: -f1,2 | xargs -L1 docker pull
|
||||
$ docker-compose -f docker/docker-compose.yml build
|
||||
</code></pre><ul>
|
||||
<li>Margarita from CCAFS emailed me to say that workflow alerts haven’t been working lately
|
||||
<li>Then run all system updates and reboot the server</li>
|
||||
<li>After the server came back up I cloned the <code>openrxv-items-final</code> index to <code>openrxv-items-temp</code> and started the plugins
|
||||
<ul>
|
||||
<li>I guess this is related to the SMTP issues last week</li>
|
||||
<li>I had fixed the config, but didn’t restart Tomcat so DSpace didn’t load the new variables</li>
|
||||
<li>I ran all system updates on CGSpace (linode18) and DSpace Test (linode26) and rebooted the servers</li>
|
||||
<li>This will hopefully be faster than a full re-harvest…</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>I opened a few GitHub issues for OpenRXV bugs:
|
||||
<ul>
|
||||
<li><a href="https://github.com/ilri/OpenRXV/issues/103">Hide “metadata structure” section in repository setup</a></li>
|
||||
<li><a href="https://github.com/ilri/OpenRXV/issues/104">Improve “start plugins” and “commit indexing” buttons</a></li>
|
||||
<li><a href="https://github.com/ilri/OpenRXV/issues/105">Allow running plugins individually</a></li>
|
||||
<li><a href="https://github.com/ilri/OpenRXV/issues/106">Hide the “DSpace add missing items”</a></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>Rebuild DSpace Test (linode26) from a fresh Ubuntu 20.04 image on Linode</li>
|
||||
<li>The start plugins on AReS had seventy-five errors from the <code>dspace_add_missing_items</code> plugin for some reason so I had to start a fresh indexing</li>
|
||||
<li>I noticed that the WorldFish data has dozens of incorrect countries so I should talk to Salem about that because they manage it
|
||||
<ul>
|
||||
<li>Also I noticed that we weren’t using the Country formatter in OpenRXV for the WorldFish country field, so some values don’t get mapped properly</li>
|
||||
<li>I added some value mappings to fix some issues with WorldFish data and added a few more fields to the repository harvesting config and started a fresh re-indexing</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-03">2021-06-03</h2>
|
||||
<h2 id="2021-07-05">2021-07-05</h2>
|
||||
<ul>
|
||||
<li>Meeting with AMCOW and IWMI to discuss AMCOW getting IWMI’s content into the new AMCOW Knowledge Hub
|
||||
<li>The AReS harvesting last night succeeded and I started the plugins</li>
|
||||
<li>Margarita from CCAFS asked if we can create a new field for AICCRA publications
|
||||
<ul>
|
||||
<li>At first we spent some time talking about DSpace communities/collections and the REST API, but then they said they actually prefer to send queries to sites on the fly and cache them in Redis for some time</li>
|
||||
<li>That’s when I thought they could perhaps use the OpenSearch, but I can’t remember if it’s possible to limit by community, or only collection…</li>
|
||||
<li>Looking now, I see there is a “scope” parameter that can be used for community or collection, for example:</li>
|
||||
<li>I asked her to clarify what they want</li>
|
||||
<li>AICCRA is an initiative so it might be better to create new field for that, for example <code>cg.contributor.initiative</code></li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code>https://cgspace.cgiar.org/open-search/discover?query=subject:water%20scarcity&scope=10568/16814&order=DESC&rpp=100&sort_by=2&start=1
|
||||
</code></pre><ul>
|
||||
<li>That will sort by date issued (see: <code>webui.itemlist.sort-option.2</code> in dspace.cfg), give 100 results per page, and start on item 1</li>
|
||||
<li>Otherwise, another alternative would be to use the IWMI CSV that we are already exporting every week</li>
|
||||
<li>Fill out the <em>CGIAR-AGROVOC Task Group: Survey on the current CGIAR use of AGROVOC</em> survey on behalf of CGSpace</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-06">2021-06-06</h2>
|
||||
<h2 id="2021-07-06">2021-07-06</h2>
|
||||
<ul>
|
||||
<li>The Elasticsearch indexes are messed up so I dumped and re-created them correctly:</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">curl -XDELETE 'http://localhost:9200/openrxv-items-final'
|
||||
curl -XDELETE 'http://localhost:9200/openrxv-items-temp'
|
||||
curl -XPUT 'http://localhost:9200/openrxv-items-final'
|
||||
curl -XPUT 'http://localhost:9200/openrxv-items-temp'
|
||||
curl -s -X POST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d'{"actions" : [{"add" : { "index" : "openrxv-items-final", "alias" : "openrxv-items"}}]}'
|
||||
elasticdump --input=/home/aorth/openrxv-items_mapping.json --output=http://localhost:9200/openrxv-items-final --type=mapping
|
||||
elasticdump --input=/home/aorth/openrxv-items_data.json --output=http://localhost:9200/openrxv-items-final --type=data --limit=1000
|
||||
</code></pre><ul>
|
||||
<li>Then I started a harvesting on AReS</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-07">2021-06-07</h2>
|
||||
<li>Atmire merged my spider user agent changes from last month so I will update the <code>example</code> list we use in DSpace and remove the new ones from my <code>ilri</code> override file
|
||||
<ul>
|
||||
<li>The harvesting on AReS completed successfully</li>
|
||||
<li>Provide feedback to FAO on how we use AGROVOC for their “AGROVOC call for use cases”</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-10">2021-06-10</h2>
|
||||
<ul>
|
||||
<li>Skype with Moayad to discuss AReS harvesting improvements
|
||||
<ul>
|
||||
<li>He will work on a plugin that reads the XML sitemap to get all item IDs and checks whether we have them or not</li>
|
||||
<li>Also, I concatenated all our user agents into one file and purged all hits:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-14">2021-06-14</h2>
|
||||
<ul>
|
||||
<li>Dump and re-create indexes on AReS (as above) so I can do a harvest</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-16">2021-06-16</h2>
|
||||
<ul>
|
||||
<li>Looking at the Solr statistics on CGSpace for last month I see many requests from hosts using seemingly normal Windows browser user agents, but using the MSN bot’s DNS
|
||||
<ul>
|
||||
<li>For example, user agent <code>Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0; Trident/5.0)</code> with DNS <code>msnbot-131-253-25-91.search.msn.com.</code></li>
|
||||
<li>I queried Solr for all hits using the MSN bot DNS (<code>dns:*msnbot* AND dns:*.msn.com.</code>) and found 457,706</li>
|
||||
<li>I extracted their IPs using Solr’s CSV format and ran them through my <code>resolve-addresses.py</code> script and found that they all belong to MICROSOFT-CORP-MSN-AS-BLOCK (AS8075)</li>
|
||||
<li>Note that <a href="https://www.bing.com/webmasters/help/how-to-verify-bingbot-3905dc26">Microsoft’s docs say that reverse lookups on Bingbot IPs will always have “search.msn.com”</a> so it is safe to purge these as non-human traffic</li>
|
||||
<li>I purged the hits with <code>ilri/check-spider-ip-hits.sh</code> (though I had to do it in 3 batches because I forgot to increase the <code>facet.limit</code> so I was only getting them 100 at a time)</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>Moayad sent a pull request a few days ago to re-work the harvesting on OpenRXV
|
||||
<ul>
|
||||
<li>It will hopefully also fix the duplicate and missing items issues</li>
|
||||
<li>I had a Skype with him to discuss</li>
|
||||
<li>I got it running on podman-compose, but I had to fix the storage permissions on the Elasticsearch volume after the first time it tries (and fails) to run:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ podman unshare chown 1000:1000 /home/aorth/.local/share/containers/storage/volumes/docker_esData_7/_data
|
||||
</code></pre><ul>
|
||||
<li>The new OpenRXV harvesting method by Moayad uses pages of 10 items instead of 100 and it’s much faster
|
||||
<ul>
|
||||
<li>I harvested 90,000+ items from DSpace Test in ~3 hours</li>
|
||||
<li>There seem to be some issues with the health check step though, as I see it is requesting one restricted item 600,000+ times…</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-17">2021-06-17</h2>
|
||||
<ul>
|
||||
<li>I ported my ilri/resolve-addresses.py script that uses IPAPI.co to use the local GeoIP2 databases
|
||||
<ul>
|
||||
<li>The new script is ilri/resolve-addresses-geoip2.py and it is much faster and works offline with no API rate limits</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>Teams meeting with the CGIAR Metadata Working group to discuss CGSpace and open repositories and the way forward</li>
|
||||
<li>More work with Moayad on OpenRXV harvesting issues
|
||||
<ul>
|
||||
<li>Using a JSON export from elasticdump we debugged the duplicate checker plugin and found that there are indeed duplicates:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data.json | awk -F: '{print $2}' | wc -l
|
||||
90459
|
||||
$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data.json | awk -F: '{print $2}' | sort | uniq | wc -l
|
||||
90380
|
||||
$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data.json | awk -F: '{print $2}' | sort | uniq -c | sort -h
|
||||
...
|
||||
2 "10568/99409"
|
||||
2 "10568/99410"
|
||||
2 "10568/99411"
|
||||
2 "10568/99516"
|
||||
3 "10568/102093"
|
||||
3 "10568/103524"
|
||||
3 "10568/106664"
|
||||
3 "10568/106940"
|
||||
3 "10568/107195"
|
||||
3 "10568/96546"
|
||||
</code></pre><h2 id="2021-06-20">2021-06-20</h2>
|
||||
<ul>
|
||||
<li>Udana asked me to update their IWMI subjects from <code>farmer managed irrigation systems</code> to <code>farmer-led irrigation</code>
|
||||
<ul>
|
||||
<li>First I extracted the IWMI community from CGSpace:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ dspace metadata-export -i 10568/16814 -f /tmp/2021-06-20-IWMI.csv
|
||||
</code></pre><ul>
|
||||
<li>Then I used <code>csvcut</code> to extract just the columns I needed and do the replacement into a new CSV:</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ csvcut -c 'id,dcterms.subject[],dcterms.subject[en_US]' /tmp/2021-06-20-IWMI.csv | sed 's/farmer managed irrigation systems/farmer-led irrigation/' > /tmp/2021-06-20-IWMI-new-subjects.csv
|
||||
</code></pre><ul>
|
||||
<li>Then I uploaded the resulting CSV to CGSpace, updating 161 items</li>
|
||||
<li>Start a harvest on AReS</li>
|
||||
<li>I found <a href="https://jira.lyrasis.org/browse/DS-1977">a bug</a> and <a href="https://github.com/DSpace/DSpace/pull/2584">a patch</a> for the private items showing up in the DSpace sitemap bug
|
||||
<ul>
|
||||
<li>The fix is super simple, I should try to apply it</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<h2 id="2021-06-21">2021-06-21</h2>
|
||||
<ul>
|
||||
<li>The AReS harvesting finished, but the indexes got messed up again</li>
|
||||
<li>I was looking at the JSON export I made yesterday and trying to understand the situation with duplicates
|
||||
<ul>
|
||||
<li>We have 90,000+ items, but only 85,000 unique:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | wc -l
|
||||
90937
|
||||
$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | sort -u | wc -l
|
||||
85709
|
||||
</code></pre><ul>
|
||||
<li>So those could be duplicates from the way we harvest pages, but they could also be from mappings…
|
||||
<ul>
|
||||
<li>Manually inspecting the duplicates where handles appear more than once:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | sort | uniq -c | sort -h
|
||||
</code></pre><ul>
|
||||
<li>Unfortunately I found no pattern:
|
||||
<ul>
|
||||
<li>Some appear twice in the Elasticsearch index, but appear in only one collection</li>
|
||||
<li>Some appear twice in the Elasticsearch index, and appear in <em>two</em> collections</li>
|
||||
<li>Some appear twice in the Elasticsearch index, but appear in three collections (!)</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>So really we need to just check whether a handle exists before we insert it</li>
|
||||
<li>I tested the <a href="https://github.com/DSpace/DSpace/pull/2584">pull request for DS-1977</a> that adjusts the sitemap generation code to exclude private items
|
||||
<ul>
|
||||
<li>It applies cleanly and seems to work, but we don’t actually have any private items</li>
|
||||
<li>The issue we are having with AReS hitting restricted items in the sitemap is that the items have restricted metadata, not that they are private</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>Testing the <a href="https://github.com/DSpace/DSpace/pull/2275">pull request for DS-4065</a> where the REST API’s <code>/rest/items</code> endpoint is not aware of private items and returns an incorrect number of items
|
||||
<ul>
|
||||
<li>This is most easily seen by setting a low limit in <code>/rest/items</code>, making one of the items private, and requesting items again with the same limit</li>
|
||||
<li>I confirmed the issue on the current DSpace 6 Demo:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq length
|
||||
5
|
||||
$ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq '.[].handle'
|
||||
"10673/4"
|
||||
"10673/3"
|
||||
"10673/6"
|
||||
"10673/5"
|
||||
"10673/7"
|
||||
# log into DSpace Demo XMLUI as admin and make one item private (for example 10673/6)
|
||||
$ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq length
|
||||
4
|
||||
$ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq '.[].handle'
|
||||
"10673/4"
|
||||
"10673/3"
|
||||
"10673/5"
|
||||
"10673/7"
|
||||
</code></pre><ul>
|
||||
<li>I tested the pull request on DSpace Test and it works, so I left a note on GitHub and Jira</li>
|
||||
<li>Last week I noticed that the Gender Platform website is using “cgspace.cgiar.org” links for CGSpace, instead of handles
|
||||
<ul>
|
||||
<li>I emailed Fabio and Marianne to ask them to please use the Handle links</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>I tested the <a href="https://github.com/DSpace/DSpace/pull/2543">pull request for DS-4271</a> where Discovery filters of type “contains” don’t work as expected when the user’s search term has spaces
|
||||
<ul>
|
||||
<li>I tested with filter “farmer managed irrigation systems” on DSpace Test</li>
|
||||
<li>Before the patch I got 293 results, and the few I checked didn’t have the expected metadata value</li>
|
||||
<li>After the patch I got 162 results, and all the items I checked had the exact metadata value I was expecting</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>I tested a fresh harvest from my local AReS on DSpace Test with the DS-4065 REST API patch and here are my results:
|
||||
<ul>
|
||||
<li>90459 in final from last harvesting</li>
|
||||
<li>90307 in temp after new harvest</li>
|
||||
<li>90327 in temp after start plugins</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>The 90327 number seems closer to the “real” number of items on CGSpace…
|
||||
<ul>
|
||||
<li>Seems close, but not entirely correct yet:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data-local-ds-4065.json | wc -l
|
||||
90327
|
||||
$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data-local-ds-4065.json | sort -u | wc -l
|
||||
90317
|
||||
</code></pre><h2 id="2021-06-22">2021-06-22</h2>
|
||||
<ul>
<li>Make a <a href="https://github.com/atmire/COUNTER-Robots/pull/43">pull request</a> to the COUNTER-Robots project to add two new user agents: crusty and newspaper
<ul>
<li>These two bots have made ~3,000 requests on CGSpace</li>
<li>Then I added them to our local bot override in CGSpace (until the above pull request is merged) and ran my bot checking script:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">$ ./ilri/check-spider-hits.sh -f dspace/config/spiders/agents/ilri -p
Purging 1339 hits from RI\/1\.0 in statistics
Purging 447 hits from crusty in statistics
Purging 3736 hits from newspaper in statistics

Total number of bot hits purged: 5522
</code></pre>
<pre><code class="language-console" data-lang="console">$ ./ilri/check-spider-hits.sh -f /tmp/spiders -p
Purging 95 hits from Drupal in statistics
Purging 38 hits from DTS Agent in statistics
Purging 601 hits from Microsoft Office Existence Discovery in statistics
Purging 51 hits from Site24x7 in statistics
Purging 62 hits from Trello in statistics
Purging 13574 hits from WhatsApp in statistics
Purging 144 hits from FlipboardProxy in statistics
Purging 37 hits from LinkWalker in statistics
Purging 1 hits from [Ll]ink.?[Cc]heck.? in statistics
Purging 427 hits from WordPress in statistics

Total number of bot hits purged: 15030
</code></pre><ul>
<li>Surprised to see RI/1.0 in there because it’s been in the override file for a while</li>
<li>Looking at the 2021 statistics in Solr I see a few more suspicious user agents:
<ul>
<li><code>PostmanRuntime/7.26.8</code></li>
<li><code>node-fetch/1.0 (+https://github.com/bitinn/node-fetch)</code></li>
<li><code>Photon/1.0</code></li>
<li><code>StatusCake_Pagespeed_indev</code></li>
<li><code>node-superagent/3.8.3</code></li>
<li><code>cortex/1.0</code></li>
</ul>
</li>
<li>These bots account for ~42,000 hits in our statistics… I will just purge them and add them to our local override, but I can’t be bothered to submit them to COUNTER-Robots since I’d have to look up the information for each one</li>
<li>I re-synced DSpace Test (linode26) with the assetstore, Solr statistics, and database from CGSpace (linode18)</li>
<li>Meet with the CGIAR–AGROVOC task group to discuss how we want to do the workflow for submitting new terms to AGROVOC</li>
<li>I extracted another list of all subjects to check against AGROVOC:</li>
</ul>
<pre><code class="language-console" data-lang="console">\COPY (SELECT DISTINCT(LOWER(text_value)) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242, 187) GROUP BY subject ORDER BY count DESC) to /tmp/2021-07-06-all-subjects.csv WITH CSV HEADER;
$ csvcut -c 1 /tmp/2021-07-06-all-subjects.csv | sed 1d > /tmp/2021-07-06-all-subjects.txt
$ ./ilri/agrovoc-lookup.py -i /tmp/2021-07-06-all-subjects.txt -o /tmp/2021-07-06-agrovoc-results-all-subjects.csv -d
</code></pre><h2 id="2021-06-23">2021-06-23</h2>
<ul>
<li>I woke up this morning to find CGSpace down
<ul>
<li>The logs show a high number of abandoned PostgreSQL connections and locks:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console"># journalctl --since=today -u tomcat7 | grep -c 'Connection has been abandoned'
978
$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
10100
</code></pre><ul>
<li>I sent a message to Atmire, hoping that the database logging stuff they put in place last time this happened will be of help now</li>
<li>In the mean time, I decided to upgrade Tomcat from 7.0.107 to 7.0.109, and the PostgreSQL JDBC driver from 42.2.20 to 42.2.22 (first on DSpace Test)</li>
<li>I also applied the following patches from the 6.4 milestone to our <code>6_x-prod</code> branch:
<ul>
<li>DS-4065: resource policy aware REST API hibernate queries</li>
<li>DS-4271: Replaced brackets with double quotes in SolrServiceImpl</li>
</ul>
</li>
<li>Test <a href="https://github.com/DSpace/DSpace/pull/3162">Hrafn Malmquist’s proposed DBCP2 changes</a> for DSpace 6.4 (DS-4574)
<ul>
<li>His changes reminded me that we can perhaps switch back to using this pooling instead of Tomcat 7’s JDBC pooling via JNDI</li>
<li>Tomcat 8 has DBCP2 built in, but we are stuck on Tomcat 7 for now</li>
</ul>
</li>
<li>After upgrading and restarting Tomcat the database connections and locks were back down to normal levels:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
63
</code></pre><ul>
<li>Looking into the database issues we had last month on 2021-06-23
<ul>
<li>I think it might have been some kind of attack because the number of XMLUI sessions was through the roof at one point (10,000!) and the number of unique IPs accessing the server that day is much higher than any other day:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console"># for num in {10..26}; do echo "2021-06-$num"; zcat /var/log/nginx/access.log.*.gz /var/log/nginx/library-access.log.*.gz | grep "$num/Jun/2021" | awk '{print $1}' | sort | uniq | wc -l; done
|
||||
2021-06-10
|
||||
10693
|
||||
2021-06-11
|
||||
10587
|
||||
2021-06-12
|
||||
7958
|
||||
2021-06-13
|
||||
7681
|
||||
2021-06-14
|
||||
12639
|
||||
2021-06-15
|
||||
15388
|
||||
2021-06-16
|
||||
12245
|
||||
2021-06-17
|
||||
11187
|
||||
2021-06-18
|
||||
9684
|
||||
2021-06-19
|
||||
7835
|
||||
2021-06-20
|
||||
7198
|
||||
2021-06-21
|
||||
10380
|
||||
2021-06-22
|
||||
10255
|
||||
2021-06-23
|
||||
15878
|
||||
2021-06-24
|
||||
9963
|
||||
2021-06-25
|
||||
9439
|
||||
2021-06-26
|
||||
7930
|
||||
</code></pre><ul>
|
||||
<li>Looking in the DSpace log, the first “pool empty” message I saw this morning was at 4AM:</li>
|
||||
<li>Similarly, the number of connections to the REST API was around the average for the recent weeks before:</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">2021-06-23 04:01:14,596 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ [http-bio-127.0.0.1-8443-exec-4323] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
|
||||
<pre><code class="language-console" data-lang="console"># for num in {10..26}; do echo "2021-06-$num"; zcat /var/log/nginx/rest.*.gz | grep "$num/Jun/2021" | awk '{print $1}' | sort | uniq | wc -l; done
|
||||
2021-06-10
|
||||
1183
|
||||
2021-06-11
|
||||
1074
|
||||
2021-06-12
|
||||
911
|
||||
2021-06-13
|
||||
892
|
||||
2021-06-14
|
||||
1320
|
||||
2021-06-15
|
||||
1257
|
||||
2021-06-16
|
||||
1208
|
||||
2021-06-17
|
||||
1119
|
||||
2021-06-18
|
||||
965
|
||||
2021-06-19
|
||||
985
|
||||
2021-06-20
|
||||
854
|
||||
2021-06-21
|
||||
1098
|
||||
2021-06-22
|
||||
1028
|
||||
2021-06-23
|
||||
1375
|
||||
2021-06-24
|
||||
1135
|
||||
2021-06-25
|
||||
969
|
||||
2021-06-26
|
||||
904
|
||||
</code></pre><ul>
|
||||
<li>Oh, and I notice 8,000 hits from a Flipboard bot using this user-agent:</li>
|
||||
<li>According to goaccess, the traffic spike started at 2AM (remember that the first “Pool empty” error in dspace.log was at 4:01AM):</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0 (FlipboardProxy/1.2; +http://flipboard.com/browserproxy)
|
||||
<pre><code class="language-console" data-lang="console"># zcat /var/log/nginx/access.log.1[45].gz /var/log/nginx/library-access.log.1[45].gz | grep -E '23/Jun/2021' | goaccess --log-format=COMBINED -
|
||||
</code></pre><ul>
|
||||
<li>We can purge them, as this is not user traffic: <a href="https://about.flipboard.com/browserproxy/">https://about.flipboard.com/browserproxy/</a>
<ul>
<li>I will add it to our local user agent pattern file and eventually submit a pull request to COUNTER-Robots</li>
</ul>
</li>
<li>Moayad sent a fix for the add missing items plugins issue (<a href="https://github.com/ilri/OpenRXV/pull/107">#107</a>)
<ul>
<li>It works MUCH faster because it correctly identifies the missing handles in each repository</li>
<li>Also it adds better debug messages to the api logs</li>
</ul>
</li>
<li>I merged <a href="https://github.com/ilri/OpenRXV/pull/96">Moayad’s health check pull request in AReS</a> and I will deploy it on the production server soon</li>
</ul>
<h2 id="2021-06-24">2021-06-24</h2>
|
||||
<ul>
|
||||
<li>I deployed the new OpenRXV code on CGSpace but I’m having problems with the indexing, something about missing the mappings on the <code>openrxv-items-temp</code> index
|
||||
<ul>
|
||||
<li>I extracted the mappings from my local instance using <code>elasticdump</code> and after putting them on CGSpace I was able to harvest…</li>
|
||||
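<ul>
<li>For reference, copying index mappings with <code>elasticdump</code> looks roughly like this (a sketch; the index names and Elasticsearch URL here are illustrative assumptions, not the exact commands I ran):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ elasticdump --input=http://localhost:9200/openrxv-items-final --output=/tmp/openrxv-items-mappings.json --type=mapping
$ elasticdump --input=/tmp/openrxv-items-mappings.json --output=http://localhost:9200/openrxv-items-temp --type=mapping
</code></pre>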
<ul>
<li>But still, there are way too many duplicates and I’m not sure what the actual number of items should be</li>
<li>According to the OAI ListRecords for each of our repositories, we should have about:
<ul>
<li>MELSpace: 9537</li>
<li>WorldFish: 4483</li>
<li>CGSpace: 91305</li>
<li>Total: 105325</li>
</ul>
</li>
<li>Looking at the last backup I have from harvesting before these changes we have 104,000 total handles, but only 99186 unique:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspace-openrxv-items-temp-backup.json | wc -l
|
||||
104797
|
||||
$ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspace-openrxv-items-temp-backup.json | sort | uniq | wc -l
|
||||
99186
|
||||
</code></pre><ul>
|
||||
<li>This number is probably unique for that particular harvest, but I don’t think it represents the true number of items…</li>
<li>The harvest of DSpace Test I did on my local test instance yesterday has about 91,000 items:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ grep -E '"repo":"DSpace Test"' 2021-06-23-openrxv-items-final-local.json | grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' | sort | uniq | wc -l
90990
</code></pre><ul>
<li>So the harvest on the live site is missing items, then why didn’t the add missing items plugin find them?!
<ul>
<li>I notice that we are missing the <code>type</code> in the metadata structure config for each repository on the production site, and we are using <code>type</code> for item type in the actual schema… so maybe there is a conflict there</li>
<li>I will rename type to <code>item_type</code> and add it back to the metadata structure</li>
<li>The add missing items plugin definitely checks this field…</li>
<li>I modified my local backup to add <code>type: item</code> and uploaded it to the temp index on production</li>
<li>Oh! nginx is blocking OpenRXV’s attempt to read the sitemap:</li>
</ul>
</li>
</ul>
<pre><code class="language-console" data-lang="console">172.104.229.92 - - [24/Jun/2021:07:52:58 +0200] "GET /sitemap HTTP/1.1" 503 190 "-" "OpenRXV harvesting bot; https://github.com/ilri/OpenRXV"
</code></pre><ul>
<li>I fixed nginx so it always allows people to get the sitemap (roughly as sketched below) and then re-ran the plugins… now it’s checking 180,000+ handles to see if they are collections or items…
<ul>
<li>I see it fetched the sitemap three times; we need to make sure it’s only doing it once for each repository</li>
</ul>
</li>
</ul>
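<ul>
<li>The nginx change is along these lines (a sketch only; the file path and upstream name are assumptions, not our exact config):</li>
</ul>
<pre><code class="language-console" data-lang="console"># /etc/nginx/sites-enabled/cgspace.conf (sketch)
# exempt the sitemap from the bot rate limit so harvesters like OpenRXV can always fetch it
location ~ ^/sitemap {
    proxy_pass http://tomcat_http;
}
</code></pre>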
<ul>
<li>According to the api logs we will be adding 5,697 items:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ docker logs api 2>/dev/null | grep dspace_add_missing_items | sort | uniq | wc -l
5697
</code></pre><ul>
<li>Spent a few hours with Moayad troubleshooting and improving OpenRXV
<ul>
<li>We found a bug in the harvesting code that can occur when you are harvesting DSpace 5 and DSpace 6 instances, as DSpace 5 uses numeric (long) IDs, and DSpace 6 uses UUIDs</li>
</ul>
</li>
</ul>
<h2 id="2021-06-25">2021-06-25</h2>
|
||||
<ul>
|
||||
<li>The new OpenRXV code creates almost 200,000 jobs when the plugins start
|
||||
<ul>
|
||||
<li>I figured out how to use <a href="https://github.com/bee-queue/arena/tree/master/example">bee-queue/arena</a> to view our Bull job queue</li>
|
||||
<li>Also, we can see the jobs directly using redis-cli:</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">$ redis-cli
|
||||
127.0.0.1:6379> SCAN 0 COUNT 5
|
||||
1) "49152"
|
||||
2) 1) "bull:plugins:476595"
|
||||
2) "bull:plugins:367382"
|
||||
3) "bull:plugins:369228"
|
||||
4) "bull:plugins:438986"
|
||||
5) "bull:plugins:366215"
|
||||
</code></pre><ul>
|
||||
<li>We can apparently get the names of the jobs in each hash using <code>hget</code>:</li>
</ul>
<pre><code class="language-console" data-lang="console">127.0.0.1:6379> TYPE bull:plugins:401827
|
||||
hash
|
||||
127.0.0.1:6379> HGET bull:plugins:401827 name
|
||||
"dspace_add_missing_items"
|
||||
</code></pre><ul>
|
||||
<li>I whipped up a one liner to get the keys for all plugin jobs, convert them to redis <code>HGET</code> commands to extract the value of the name field, and then sort them by their counts:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ redis-cli KEYS "bull:plugins:*" \
|
||||
| sed -e 's/^bull/HGET bull/' -e 's/\([[:digit:]]\)$/\1 name/' \
|
||||
| ncat -w 3 localhost 6379 \
|
||||
| grep -v -E '^\$' | sort | uniq -c | sort -h
|
||||
3 dspace_health_check
|
||||
4 -ERR wrong number of arguments for 'hget' command
|
||||
12 mel_downloads_and_views
|
||||
129 dspace_altmetrics
|
||||
932 dspace_downloads_and_views
|
||||
186428 dspace_add_missing_items
|
||||
</code></pre><ul>
|
||||
<li>Note that this uses <code>ncat</code> to send commands directly to redis all at once instead of one at a time (<code>netcat</code> didn’t work here, as it doesn’t know when our input is finished and never quits)
<ul>
<li>I thought of using <code>redis-cli --pipe</code> but then you have to construct the commands in the redis protocol format with the number of args and length of each command (see the sketch after this list)</li>
</ul>
</li>
<li>There is clearly something wrong with the new DSpace health check plugin, as it creates WAY too many jobs every time we run the plugins</li>
</ul>
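<ul>
<li>For illustration, a single <code>HGET bull:plugins:401827 name</code> in the raw redis protocol would look something like this (a sketch of the RESP format, not something I actually piped in):</li>
</ul>
<pre><code class="language-console" data-lang="console">*3
$4
HGET
$19
bull:plugins:401827
$4
name
</code></pre>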
<h2 id="2021-06-27">2021-06-27</h2>
|
||||
<ul>
|
||||
<li>Looking into the spike in PostgreSQL connections last week
|
||||
<ul>
|
||||
<li>I see the same things that I always see (large number of connections waiting for lock, large number of threads, high CPU usage, etc), but I also see almost 10,000 DSpace sessions on 2021-06-25</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
<p><img src="/cgspace-notes/2021/06/dspace-sessions-week.png" alt="DSpace sessions"></p>
<ul>
<li>Looking at the DSpace log I see there was definitely a higher number of sessions that day, perhaps twice the normal:</li>
</ul>
<pre><code class="language-console" data-lang="console">$ for file in dspace.log.2021-06-[12]*; do echo "$file"; grep -oE 'session_id=[A-Z0-9]{32}' "$file" | sort | uniq | wc -l; done
|
||||
dspace.log.2021-06-10
|
||||
19072
|
||||
dspace.log.2021-06-11
|
||||
19224
|
||||
dspace.log.2021-06-12
|
||||
19215
|
||||
dspace.log.2021-06-13
|
||||
16721
|
||||
dspace.log.2021-06-14
|
||||
17880
|
||||
dspace.log.2021-06-15
|
||||
12103
|
||||
dspace.log.2021-06-16
|
||||
4651
|
||||
dspace.log.2021-06-17
|
||||
22785
|
||||
dspace.log.2021-06-18
|
||||
21406
|
||||
dspace.log.2021-06-19
|
||||
25967
|
||||
dspace.log.2021-06-20
|
||||
20850
|
||||
dspace.log.2021-06-21
|
||||
6388
|
||||
dspace.log.2021-06-22
|
||||
5945
|
||||
dspace.log.2021-06-23
|
||||
46371
|
||||
dspace.log.2021-06-24
|
||||
9024
|
||||
dspace.log.2021-06-25
|
||||
12521
|
||||
dspace.log.2021-06-26
|
||||
16163
|
||||
dspace.log.2021-06-27
|
||||
5886
|
||||
</code></pre><ul>
|
||||
<li>I see 15,000 unique IPs in the XMLUI logs alone on that day:</li>
</ul>
<pre><code class="language-console" data-lang="console"># zcat /var/log/nginx/access.log.5.gz /var/log/nginx/access.log.4.gz | grep '23/Jun/2021' | awk '{print $1}' | sort | uniq | wc -l
|
||||
15835
|
||||
</code></pre><ul>
|
||||
<li>Annoyingly I found 37,000 more hits from Bing using <code>dns:*msnbot* AND dns:*.msn.com.</code> as a Solr filter
<ul>
<li>WTF, they are using a normal user agent: <code>Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko</code></li>
<li>I will purge the IPs and add this user agent to the nginx config so that we can rate limit it</li>
</ul>
</li>
</ul>
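<ul>
<li>Purging hits like these from the Solr statistics core is essentially a delete-by-query; a rough sketch of the idea (the Solr URL and port here are assumptions, and in practice I do this with our purge scripts rather than by hand):</li>
</ul>
<pre><code class="language-console" data-lang="console">$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>dns:*msnbot* AND dns:*.msn.com.</query></delete>'
</code></pre>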
<ul>
<li>I signed up for Bing Webmaster Tools and verified cgspace.cgiar.org with the BingSiteAuth.xml file
<ul>
<li>Also I adjusted the nginx config to explicitly allow access to <code>robots.txt</code> even when bots are rate limited</li>
<li>Also I found that Bing was auto discovering all our RSS and Atom feeds as “sitemaps” so I deleted 750 of them and submitted the real sitemap</li>
<li>I need to see if I can adjust the nginx config further to map the <code>bot</code> user agent to DNS like msnbot… (the general rate-limiting idea is sketched after this list)</li>
</ul>
</li>
<li>Review Abdullah’s filter on click pull request
<ul>
<li>I rebased his code on the latest master branch and tested adding filter on click to the map and list components, and it works fine</li>
<li>There seems to be a bug that breaks scrolling on the page though…</li>
<li>Abdullah fixed the bug in the filter on click branch</li>
</ul>
</li>
</ul>
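<ul>
<li>For reference, the general shape of bot rate limiting in nginx (a sketch with assumed zone names, rates, and upstream name, not our exact production config) is to map suspicious user agents to a non-empty rate-limit key and keep <code>robots.txt</code> in a separate, unlimited location:</li>
</ul>
<pre><code class="language-console" data-lang="console"># http context: requests with an empty key are not rate limited
map $http_user_agent $bot_limit_key {
    default                "";
    ~*(bot|crawl|spider)   $binary_remote_addr;
}

limit_req_zone $bot_limit_key zone=bots:10m rate=1r/s;

server {
    # bot-looking user agents get rate limited on normal requests...
    location / {
        limit_req zone=bots burst=5 nodelay;
        proxy_pass http://tomcat_http;
    }

    # ...but robots.txt has its own location with no limit_req, so it is always allowed
    location = /robots.txt {
        proxy_pass http://tomcat_http;
    }
}
</code></pre>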
<h2 id="2021-06-28">2021-06-28</h2>
|
||||
<ul>
|
||||
<li>Some work on OpenRXV
|
||||
<ul>
|
||||
<li>Integrate <code>prettier</code> into the frontend and backend and format everything on the <code>master</code> branch</li>
|
||||
<li>Re-work the GitHub Actions workflow for frontend and add one for backend</li>
|
||||
<li>The workflows run <code>npm install</code> to test dependencies, and <code>npm ci</code> with <code>prettier</code> to check formatting</li>
|
||||
<li>Also I merged Abdallah’s filter on click pull request</li>
|
||||
</ul>
|
||||
</li>
|
||||
</ul>
|
||||
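<ul>
<li>The workflows are along these lines (a rough sketch; the file path, Node version, and action versions are assumptions, not the exact workflow we merged):</li>
</ul>
<pre><code class="language-console" data-lang="console"># .github/workflows/frontend.yml (sketch)
name: Frontend
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: npx prettier --check .
</code></pre>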
<h2 id="2021-06-30">2021-06-30</h2>
|
||||
<ul>
|
||||
<li>CGSpace is showing a blank white page…
|
||||
<ul>
|
||||
<li>The status is HTTP 200, but it’s blank white… so UptimeRobot didn’t send a notification!</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>The DSpace log shows:</li>
|
||||
</ul>
|
||||
<pre><code class="language-console" data-lang="console">2021-06-30 08:19:15,874 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ Cannot get a connection, pool error Timeout waiting for idle object
|
||||
</code></pre><ul>
|
||||
<li>The first one of these I see is from last night at 2021-06-29 at 10:47 PM</li>
<li>I restarted Tomcat 7 and CGSpace came back up…</li>
<li>I didn’t see that Atmire had responded last week (on 2021-06-23) about the issues we had
<ul>
<li>He said they had to do the same thing that they did last time: switch to the postgres user and kill all activity</li>
<li>He said they found tons of connections to the REST API, like 3-4 per second, and asked if that was normal</li>
<li>I pointed him to our Tomcat server.xml configuration, saying that we purposefully isolated the Tomcat connection pools between the API and XMLUI for this purpose…</li>
</ul>
</li>
<li>Export a list of all CGSpace’s AGROVOC keywords with counts for Enrico and Elizabeth Arnaud to discuss with AGROVOC:</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT text_value AS "dcterms.subject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id = 187 GROUP BY "dcterms.subject" ORDER BY count DESC) to /tmp/2021-06-30-agrovoc.csv WITH CSV HEADER;
|
||||
COPY 20780
|
||||
</code></pre><ul>
|
||||
<li>Actually Enrico wanted NON AGROVOC, so I extracted all the center and CRP subjects (ignoring system office and themes):</li>
</ul>
<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242) GROUP BY subject ORDER BY count DESC) to /tmp/2021-06-30-non-agrovoc.csv WITH CSV HEADER;
|
||||
COPY 1710
|
||||
</code></pre><ul>
|
||||
<li>Fix an issue in the Ansible infrastructure playbooks for the DSpace role
<ul>
<li>It was causing the template module to fail when setting up the npm environment</li>
<li>We needed to install <code>acl</code> so that Ansible can use <code>setfacl</code> on the target file before becoming an unprivileged user (see the sketch after this list)</li>
</ul>
</li>
</ul>
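<ul>
<li>The fix is essentially just installing the package early in the role; a minimal sketch of the task (the task name and file placement are assumptions):</li>
</ul>
<pre><code class="language-console" data-lang="console"># roles/dspace/tasks/main.yml (sketch)
- name: Install acl so unprivileged become can use setfacl on temporary files
  apt:
    name: acl
    state: present
</code></pre>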
<ul>
<li>I saw a strange message in the Tomcat 7 journal on DSpace Test (linode26):</li>
</ul>
<pre><code class="language-console" data-lang="console">Jun 30 16:00:09 linode26 tomcat7[30294]: WARNING: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [111,733] milliseconds.
|
||||
</code></pre><ul>
|
||||
<li>What’s even crazier is that it is twice that on CGSpace (linode18)!</li>
<li>Apparently OpenJDK defaults to using <code>/dev/random</code> (see <code>/etc/java-8-openjdk/security/java.security</code>), so I changed it to <code>/dev/urandom</code>:</li>
</ul>
<pre><code class="language-console" data-lang="console">securerandom.source=file:/dev/urandom
|
||||
</code></pre><ul>
|
||||
<li><code>/dev/random</code> blocks and can take a long time to get entropy, and urandom on modern Linux is a cryptographically secure pseudorandom number generator
<ul>
<li>Now Tomcat starts much faster and no warning is printed so I’m going to add this to our Ansible infrastructure playbooks</li>
</ul>
</li>
<li>Interesting resource about the lore behind the <code>/dev/./urandom</code> workaround that is posted all over the Internet, apparently due to a bug in early JVMs: <a href="https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6202721">https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6202721</a></li>
</ul>
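<ul>
<li>The same effect can also be achieved per JVM instead of editing <code>java.security</code>, for example via Tomcat’s <code>JAVA_OPTS</code> (a sketch; the file location is an assumption):</li>
</ul>
<pre><code class="language-console" data-lang="console"># /etc/default/tomcat7 (sketch)
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
</code></pre>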
<ul>
<li>I’m experimenting with using PgBouncer for pooling instead of Tomcat’s JDBC pooling</li>
</ul>
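<ul>
<li>A rough sketch of what the PgBouncer side would look like (the database name, sizes, and paths here are assumptions, not a final configuration):</li>
</ul>
<pre><code class="language-console" data-lang="console">; /etc/pgbouncer/pgbouncer.ini (sketch)
[databases]
dspace = host=localhost port=5432 dbname=dspace

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; session mode is the safe default for JDBC clients that use prepared statements
pool_mode = session
max_client_conn = 300
default_pool_size = 50
</code></pre>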
<!-- raw HTML omitted -->