---
title: "September, 2022"
date: 2022-09-01T09:41:36+03:00
author: "Alan Orth"
categories: ["Notes"]
---

## 2022-09-01

- A bit of work on the "Mapping CG Core–CGSpace–MEL–MARLO Types" spreadsheet
- I tested an item submission on DSpace Test with the Cocoon `org.apache.cocoon.uploads.autosave=false` change
  - The submission works as expected
- Start debugging some region-related issues with csv-metadata-quality
- I created a new test file `test-geography.csv` with some different scenarios
  - I also fixed a few bugs and improved the region-matching logic
- I filed an issue for the "South-eastern Asia" case mismatch in country_converter on GitHub
- Meeting with Moayad to discuss OpenRXV developments
- He demoed his new multiple dashboards feature and I helped him rebase those changes to master so we can test them more

## 2022-09-02

- I worked a bit more on exclusion and skipping logic in csv-metadata-quality
- I also pruned and updated all the Python dependencies
- Then I released version 0.6.0 now that the excludes and region-matching support are working way better
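For reference, a sketch of how the new exclude support can be invoked on a CSV export (the `-x` flag name is from memory, so treat it as an assumption):

```console
$ csv-metadata-quality -i /tmp/items.csv -o /tmp/items-cleaned.csv -x 'dc.contributor.author,dcterms.issued'
```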

## 2022-09-05

- Started a harvest on AReS last night
- Looking over the Solr statistics from last month I see many user agents that look suspicious:
- Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.2; WOW64; Trident/7.0; .NET4.0E; .NET4.0C)
- Mozilla / 5.0(Windows NT 10.0; Win64; x64) AppleWebKit / 537.36(KHTML, like Gecko) Chrome / 77.0.3865.90 Safari / 537.36
- Mozilla/5.0 (Windows NT 10.0; WOW64; Rv:50.0) Gecko/20100101 Firefox/50.0
- Mozilla/5.0 (X11; Linux i686; rv:2.0b12pre) Gecko/20110204 Firefox/4.0b12pre
- Mozilla/5.0 (Windows NT 10.0; Win64; x64; Xbox; Xbox One) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36 Edge/44.18363.8131
- Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
- Mozilla/4.0 (compatible; MSIE 4.5; Windows 98;)
- curb
- bitdiscovery
- omgili/0.5 +http://omgili.com
- Mozilla/5.0 (compatible)
- Vizzit
- Mozilla/5.0 (Windows NT 5.1; rv:52.0) Gecko/20100101 Firefox/52.0
- Mozilla/5.0 (Android; Mobile; rv:13.0) Gecko/13.0 Firefox/13.0
- Java/17-ea
- AdobeUxTechC4-Async/3.0.12 (win32)
- ZaloPC-win32-24v473
- Mozilla/5.0/Firefox/42.0 - nbertaupete95(at)gmail.com
- Scoop.it
- Mozilla/5.0 (Windows NT 6.1; rv:27.0) Gecko/20100101 Firefox/27.0
- Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
- ows NT 10.0; WOW64; rv: 50.0) Gecko/20100101 Firefox/50.0
- WebAPIClient
- Mozilla/5.0 Firefox/26.0
- Mozilla/5.0 (compatible; woorankreview/2.0; +https://www.woorank.com/)
- For example, some are apparently using versions of Firefox that are over ten years old, and some are obviously trying to look like valid user agents but making typos (`Mozilla / 5.0`)
- Tons of hosts making requests like this:

```
GET /bitstream/handle/10568/109408/Milk%20testing%20lab%20protocol.pdf?sequence=1&isAllowed=\x22><script%20>alert(String.fromCharCode(88,83,83))</script> HTTP/1.1" 400 5 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Rv:50.0) Gecko/20100101 Firefox/50.0
```

- I got a list of hosts making requests like that so I can purge their hits:

```console
# zcat /var/log/nginx/{access,library-access,oai,rest}.log.[123]*.gz | grep 'String.fromCharCode(' | awk '{print $1}' | sort -u > /tmp/ips.txt
```

- I purged 4,718 hits from those IPs
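The purge itself was presumably done with the `check-spider-ip-hits.sh` helper that shows up again later this month; a sketch, reusing the IP list from above:

```console
$ ./ilri/check-spider-ip-hits.sh -f /tmp/ips.txt -p
```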
- I see some new Hetzner ranges that I hadn't blocked yet apparently?
- I got a list of Hetzner's IPs from IP Quality Score then added them to the existing ones in my Ansible playbooks:

```console
$ awk '{print $1}' /tmp/hetzner.txt | wc -l
36
$ sort -u /tmp/hetzner-combined.txt | wc -l
49
```

- I will add this new list to nginx's `bot-networks.conf` so they get throttled on scraping XMLUI and get classified as bots in Solr statistics
- Then I purged hits from the following user agents:

```console
$ ./ilri/check-spider-hits.sh -f /tmp/agents
Found 374 hits from curb in statistics
Found 350 hits from bitdiscovery in statistics
Found 564 hits from omgili in statistics
Found 390 hits from Vizzit in statistics
Found 9125 hits from AdobeUxTechC4-Async in statistics
Found 97 hits from ZaloPC-win32-24v473 in statistics
Found 518 hits from nbertaupete95 in statistics
Found 218 hits from Scoop.it in statistics
Found 584 hits from WebAPIClient in statistics

Total number of hits from bots: 12220
```

- Then I will add these user agents to the ILRI spider override in DSpace
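The override is a plain-text file of regex patterns, one per line, so the additions would look something like this (a sketch based on the agents purged above; the exact path of the ILRI file in our DSpace config is not shown here):

```
curb
bitdiscovery
omgili
Vizzit
AdobeUxTechC4-Async
ZaloPC-win32-24v473
nbertaupete95
Scoop.it
WebAPIClient
```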

## 2022-09-06

- I'm testing dspace-statistics-api with our DSpace 7 test server
  - After setting up the env and the database, `python -m dspace_statistics_api.indexer` runs without issues
- While playing with Solr I tried to search for statistics from this month using `time:2022-09*` but I get this error: "Can't run prefix queries on numeric fields"
  - I guess that the syntax in Solr changed since 4.10...
- This works, but is super annoying: `time:[2022-09-01T00:00:00Z TO 2022-09-30T23:59:59Z]`
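For reference, a sketch of the same range query run against the statistics core with curl (host and port are assumptions for a local Solr):

```console
$ curl -s 'http://localhost:8983/solr/statistics/select' \
    --data-urlencode 'q=time:[2022-09-01T00:00:00Z TO 2022-09-30T23:59:59Z]' \
    --data-urlencode 'rows=0'
```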

## 2022-09-07

- I tested the controlled-vocabulary changes on DSpace 6 and they work fine
- Last week I found that DSpace 7 is more strict with controlled vocabularies and requires IDs for all node values
- This is a pain because it means I have to re-do the IDs in each file every time I update them
- If I add `id="0000"` to each, then I can use the vim expression `let i=0001 | g/0000/s//\=i/ | let i=i+1` to replace the numbers with increments starting from 1
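To illustrate, the placeholder IDs become sequential after running the expression (node labels here are just examples):

```xml
<!-- before -->
<node id="0000" label="ANIMAL BREEDING"/>
<node id="0000" label="ANIMAL HEALTH"/>

<!-- after -->
<node id="1" label="ANIMAL BREEDING"/>
<node id="2" label="ANIMAL HEALTH"/>
```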
- Meeting with Marie Angelique, Abenet, Sara, and Margarita to continue the discussion about Types from last week
- We made progress with concrete actions and will continue next week

## 2022-09-08

- I had a meeting with Nicky from UNEP to discuss issues they are having with their DSpace
- I told her about the meeting of DSpace community people that we're planning at ILRI in the next few weeks

## 2022-09-09

- Add some value mappings to AReS because I see a lot of incorrect regions and countries
- I also found some values that were blank in CGSpace so I deleted them:

```console
dspace=# BEGIN;
BEGIN
dspace=# DELETE FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND text_value='';
DELETE 70
dspace=# COMMIT;
COMMIT
```

- Start a full Discovery index on CGSpace to catch these changes in Discovery
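That reindex is presumably the standard full Discovery rebuild from the DSpace CLI:

```console
$ dspace index-discovery -b
```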

## 2022-09-11

- Today is Sunday and I see the load on the server is high
- Google and a bunch of other bots have been blocked on XMLUI for the past two weeks so it's not from them!
- Looking at the top IPs this morning:

```console
# cat /var/log/nginx/{access,library-access,oai,rest}.log /var/log/nginx/{access,library-access,oai,rest}.log.1 | grep '11/Sep/2022' | awk '{print $1}' | sort | uniq -c | sort -h | tail -n 40
...
    165 64.233.172.79
    166 87.250.224.34
    200 69.162.124.231
    202 216.244.66.198
    385 207.46.13.149
    398 207.46.13.147
    421 66.249.64.185
    422 157.55.39.81
    442 2a01:4f8:1c17:5550::1
    451 64.124.8.36
    578 137.184.159.211
    597 136.243.228.195
   1185 66.249.64.183
   1201 157.55.39.80
   3135 80.248.237.167
   4794 54.195.118.125
   5486 45.5.186.2
   6322 2a01:7e00::f03c:91ff:fe9a:3a37
   9556 66.249.64.181
```

- The top is still Google, but all the requests are HTTP 503 because I classified them as bots for XMLUI at least
- Then there's 80.248.237.167, which is using a normal user agent and scraping Discovery
- That IP is on Internet Vikings aka Internetbolaget and we are already marking that subnet as 'bot' for XMLUI so most of these requests are HTTP 503
- On another note, I'm curious to explore enabling caching of certain REST API responses
- For example, where the use is for harvesting rather than actual clients getting bitstreams or thumbnails, it seems there might be a benefit to speeding these up for subsequent requestors:

```console
# awk '{print $7}' /var/log/nginx/rest.log | grep -v retrieve | sort | uniq -c | sort -h | tail -n 10
      4 /rest/items/3f692ddd-7856-4bf0-a587-99fb3df0688a/bitstreams
      4 /rest/items/3f692ddd-7856-4bf0-a587-99fb3df0688a/metadata
      4 /rest/items/b014e36f-b496-43d8-9148-cc9db8a6efac/bitstreams
      4 /rest/items/b014e36f-b496-43d8-9148-cc9db8a6efac/metadata
      5 /rest/handle/10568/110310?expand=all
      5 /rest/handle/10568/89980?expand=all
      5 /rest/handle/10568/97614?expand=all
      6 /rest/handle/10568/107086?expand=all
      6 /rest/handle/10568/108503?expand=all
      6 /rest/handle/10568/98424?expand=all
```

- I specifically have to not cache things like requests for bitstreams because those are from actual users and we need to keep the real requests so we get the statistics hit
- Will be interesting to check the results above as the day goes on (now 10AM)
- To estimate the potential savings from caching I will check how many non-bitstream requests are made versus how many are made more than once (updated the next morning using yesterday's log):

```console
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort -u | wc -l
33733
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort | uniq -c | awk '$1 > 1' | wc -l
5637
```

- In the afternoon I started a harvest on AReS (which should affect the numbers above also)
- I enabled an nginx proxy cache on DSpace Test for this location regex: `location ~ /rest/(handle|items|collections|communities)/.+`
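A minimal sketch of what that setup looks like in nginx (the zone name and cache path match what appears later in these notes, but the upstream name, sizes, and validity times are assumptions since the real values live in our Ansible templates):

```nginx
# http context: one shared cache zone for REST API responses
proxy_cache_path /var/cache/nginx/rest_cache levels=1:2
                 keys_zone=rest_cache:1m max_size=1g inactive=24h;

# server context: only cache the harvesting-style REST requests
location ~ /rest/(handle|items|collections|communities)/.+ {
    proxy_pass        http://tomcat_http;   # assumed upstream name
    proxy_cache       rest_cache;
    proxy_cache_valid 200 24h;
    add_header        X-Cache-Status $upstream_cache_status;
}
```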

## 2022-09-12

- I am testing harvesting DSpace Test via AReS with the nginx proxy cache enabled
- I had to tune the regular expression in nginx a bit because the REST requests OpenRXV uses weren't matching
- Now I'm trying this one: `/rest/(handle|items|collections|communities)/?`
- Testing in regex101.com with these test strings:

```
/rest/handle/10568/27611
/rest/items?expand=metadata,parentCommunityList,parentCollectionList,bitstreams&limit=10&offset=36270
/rest/handle/10568/110310?expand=all
/rest/rest/bitstreams/28926633-c7c2-49c2-afa8-6d81cadc2316/retrieve
/rest/bitstreams/15412/retrieve
/rest/items/083dbb0d-11e2-4dfe-902b-eb48e4640d04/metadata
/rest/items/083dbb0d-11e2-4dfe-902b-eb48e4640d04/bitstreams
/rest/collections/edea23c0-0ebd-4525-90b0-0b401f997704/items
/rest/items/14507941-aff2-4d57-90bd-03a0733ad859/metadata
/rest/communities/b38ea726-475f-4247-a961-0d0b76e67f85/collections
/rest/collections/e994c450-6ff7-41c6-98df-51e5c424049e/items?limit=10000
```

- Basically all of them except numbers 4 and 5 (the bitstream `/retrieve` requests) should match
- I estimate that it will take about 1GB of cache to harvest 100,000 items from CGSpace with OpenRXV (10,000 pages)
- Upload 682 OICRs from MARLO to CGSpace
- We had tested these on DSpace Test last month along with the MELIAs, Policies, and Innovations, but we decided to upload the OICRs first so that other things can link against them as related items

## 2022-09-14

- Meeting with Peter, Abenet, Indira, and Michael about CGSpace rollout plan for the Initiatives

## 2022-09-16

- Meeting with Marie-Angelique, Abenet, Margarita, and Sara about types for CG Core
- We are about halfway through the list of types now, with concrete actions for CG Core and CGSpace
- We will meet next in two weeks to hopefully finalize the list, then we can move on to definitions

## 2022-09-18

- Deploy the `org.apache.cocoon.uploads.autosave=false` change on CGSpace
- Start a harvest on AReS

## 2022-09-19

- Deploy the nginx proxy cache for /rest requests on CGSpace
- I had tested this last week on DSpace Test
- By my counts on CGSpace yesterday (Sunday, a busy day for the REST API), we had 5,654 URLs that were requested more than once, with the counts tailing off quickly after two or three requests:

```console
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort | uniq -c | awk '$1 > 1' | wc -l
5654
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort | uniq -c | awk '$1 == 2' | wc -l
4710
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort | uniq -c | awk '$1 == 3' | wc -l
814
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort | uniq -c | awk '$1 == 4' | wc -l
86
# awk '{print $7}' /var/log/nginx/rest.log.1 | grep -v retrieve | sort | uniq -c | awk '$1 == 5' | wc -l
39
```

- For now I guess requests that were made two or three times by different clients will be served from the cache, which is a win, and I expect more and more REST API activity soon as the Initiatives and One CGIAR work picks up

## 2022-09-20

- I checked the status of the nginx REST API cache on CGSpace and it was stuck at 7,083 items for hours:

```console
# find /var/cache/nginx/rest_cache/ -type f | wc -l
7083
```

- The proxy cache key zone is currently 1m, which can store ~8,000 keys, so that could be what we're running into
- I increased it to 2m and will keep monitoring it
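The change is just the `keys_zone` size in the `proxy_cache_path` directive (other parameters as in the earlier sketch):

```nginx
proxy_cache_path /var/cache/nginx/rest_cache levels=1:2
                 keys_zone=rest_cache:2m max_size=1g inactive=24h;
```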
- CIP webmaster contacted me to say they are having problems harvesting CGSpace from their WordPress
- I am not sure if there are issues due to the REST API caching I enabled...

## 2022-09-21

- Planning the Nairobi DSpace Users meeting with Abenet
- Planning to have a call about MEL submitting to CGSpace on Monday with Mohammed Salem
- I created two collections on DSpace Test: one with a workflow, and one without
- According to my notes from [2020-10]({{< relref "2020-10.md" >}}) the account must be in the admin group in order to submit via the REST API, so I added it to the admin group of each collection

## 2022-09-22

- Nairobi DSpace users meeting at ILRI
- I found a few users that didn't have ORCID iDs and were missing tags on CGSpace so I tagged them:

```csv
dc.contributor.author,cg.creator.identifier
"Alonso, Silvia","Silvia Alonso: 0000-0002-0565-536X"
"Goopy, John P.","John Goopy: 0000-0001-7177-1310"
"Korir, Daniel","Daniel Korir: 0000-0002-1356-8039"
"Leitner, Sonja","Sonja Leitner: 0000-0002-1276-8071"
"Fèvre, Eric M.","Eric M. Fèvre: 0000-0001-8931-4986"
"Galiè, Alessandra","Alessandra Galie: 0000-0001-9868-7733"
"Baltenweck, Isabelle","Isabelle Baltenweck: 0000-0002-4147-5921"
"Robinson, Timothy P.","Timothy Robinson: 0000-0002-4266-963X"
"Lannerstad, Mats","Mats Lannerstad: 0000-0002-5116-3198"
"Graham, Michael","Michael Graham: 0000-0002-1189-8640"
"Merbold, Lutz","Lutz Merbold: 0000-0003-4974-170X"
"Rufino, Mariana C.","Mariana Rufino: 0000-0003-4293-3290"
"Wilkes, Andreas","Andreas Wilkes: 0000-0001-7546-991X"
"van der Weerden, T.","Tony van der Weerden: 0000-0002-6999-2584"
"Vermeulen, S.","Sonja Vermeulen: 0000-0001-6242-9513"
"Vermeulen, Sonja","Sonja Vermeulen: 0000-0001-6242-9513"
"Vermeulen, Sonja J.","Sonja Vermeulen: 0000-0001-6242-9513"
"Hung Nguyen-Viet","Hung Nguyen-Viet: 0000-0003-1549-2733"
"Herrero, Mario T.","Mario Herrero: 0000-0002-7741-5090"
"Thornton, Philip K.","Philip Thornton: 0000-0002-1854-0182"
"Duncan, Alan J.","Alan Duncan: 0000-0002-3954-3067"
"Lukuyu, Ben A.","Ben Lukuyu: 0000-0002-9374-3553"
"Lindahl, Johanna F.","Johanna Lindahl: 0000-0002-1175-0398"
"Okeyo Mwai, Ally","Ally Okeyo Mwai: 0000-0003-2379-7801"
"Wieland, Barbara","Barbara Wieland: 0000-0003-4020-9186"
"Omore, Amos O.","Amos Omore: 0000-0001-9213-9891"
"Randolph, Thomas F.","Thomas Fitz Randolph: 0000-0003-1849-9877"
"Staal, Steven J.","Steven Staal: 0000-0002-1244-1773"
"Hanotte, Olivier H.","Olivier Hanotte: 0000-0002-2877-4767"
"Dessie, Tadelle","Tadelle Dessie: 0000-0002-1630-0417"
"Dione, Michel M.","Michel Dione: 0000-0001-7812-5776"
"Gebremedhin, Berhanu","Berhanu Gebremedhin: 0000-0002-3168-2783"
"Ouma, Emily A.","Emily Ouma: 0000-0002-3123-1376"
"Roesel, Kristina","Kristina Roesel: 0000-0002-2553-1129"
"Bishop, Richard P.","Richard Bishop: 0000-0002-3720-9970"
"Lapar, Ma. Lucila","Ma. Lucila Lapar: 0000-0002-4214-9845"
"Rich, Karl M.","Karl Rich: 0000-0002-5581-9553"
"Hoekstra, Dirk","Dirk Hoekstra: 0000-0002-6111-6627"
"Nene, Vishvanath","Vishvanath Nene: 0000-0001-7066-4169"
"Patel, S.P.","Sonal Henson: 0000-0002-2002-5462"
"Hanson, Jean","Jean Hanson: 0000-0002-3648-2641"
"Marshall, Karen","Karen Marshall: 0000-0003-4197-1455"
"Notenbaert, An Maria Omer","An Maria Omer Notenbaert: 0000-0002-6266-2240"
"Ojango, Julie M.K.","Ojango J.M.K.: 0000-0003-0224-5370"
"Wijk, Mark T. van","Mark van Wijk: 0000-0003-0728-8839"
"Tarawali, Shirley A.","Shirley Tarawali: 0000-0001-9398-8780"
"Naessens, Jan","Jan Naessens: 0000-0002-7075-9915"
"Butterbach-Bahl, Klaus","Klaus Butterbach-Bahl: 0000-0001-9499-6598"
"Poole, Elizabeth J.","Elizabeth Jane Poole: 0000-0002-8570-794X"
"Mulema, Annet A.","Annet Mulema: 0000-0003-4192-3939"
"Dror, Iddo","Iddo Dror: 0000-0002-0800-7456"
"Ballantyne, Peter G.","Peter G. Ballantyne: 0000-0001-9346-2893"
"Baker, Derek","Derek Baker: 0000-0001-6020-6973"
"Ericksen, Polly J.","Polly Ericksen: 0000-0002-5775-7691"
"Jones, Christopher S.","Chris Jones: 0000-0001-9096-9728"
"Mude, Andrew G.","Andrew Mude: 0000-0003-4903-6613"
"Puskur, Ranjitha","Ranjitha Puskur: 0000-0002-9112-3414"
"Kiara, Henry K.","Henry Kiara: 0000-0001-9578-1636"
"Gibson, John P.","John Gibson: 0000-0003-0371-2401"
"Flintan, Fiona E.","Fiona Flintan: 0000-0002-9732-097X"
"Mrode, Raphael A.","Raphael Mrode: 0000-0003-1964-5653"
"Mtimet, Nadhem","Nadhem Mtimet: 0000-0003-3125-2828"
"Said, Mohammed Yahya","Mohammed Yahya Said: 0000-0001-8127-6399"
"Pezo, Danilo A.","Danilo Pezo: 0000-0001-5345-5314"
"Haileslassie, Amare","Amare Haileslassie: 0000-0001-5237-9006"
"Wright, Iain A.","Iain Wright: 0000-0002-6216-5308"
"Cadilhon, Joseph J.","Jean-Joseph Cadilhon: 0000-0002-3181-5136"
"Domelevo Entfellner, Jean-Baka","Jean-Baka Domelevo Entfellner: 0000-0002-8282-1325"
"Oyola, Samuel O.","Samuel O. Oyola: 0000-0002-6425-7345"
"Agaba, M.","Morris Agaba: 0000-0001-6777-0382"
"Beebe, Stephen E.","Stephen E Beebe: 0000-0002-3742-9930"
"Ouso, Daniel","Daniel Ouso: 0000-0003-0994-2558"
"Ouso, Daniel O.","Daniel Ouso: 0000-0003-0994-2558"
"Rono, Gilbert K.","Gilbert Kibet-Rono: 0000-0001-8043-5423"
"Kibet, Gilbert","Gilbert Kibet-Rono: 0000-0001-8043-5423"
"Juma, John","John Juma: 0000-0002-1481-5337"
"Juma, J.","John Juma: 0000-0002-1481-5337"
```

```console
$ ./ilri/add-orcid-identifiers-csv.py -i /tmp/2022-09-22-add-orcids.csv -db dspace -u dspace -p 'fuuu'
```

- This adds nearly 5,500 ORCID tags!
- Some of these authors were not in the controlled vocabulary so I added them

## 2022-09-23

- Tag some more ORCID metadata (amended above)
- Meeting with Peter and Abenet to discuss CGSpace issues
- We found a workable solution to the MEL submission issue: they can submit to a dedicated MEL-only collection with no workflow and we will map or move the items after
- Pascal says that they have made a pull request for their duplicate checker on DSpace 7 yayyyy

## 2022-09-24

- Found some more ORCID identifiers to tag so I added them to the list above
- Start a harvest on AReS around 8PM on Saturday night

## 2022-09-25

- The harvest on AReS finished, and the load on the CGSpace server is still high, as always on Sunday mornings
- UptimeRobot says it's down sigh...
- I had an idea to include the HTTP Accept header in the nginx proxy cache key to fix the issue we had with CIP last week
- It seems to work:

```console
$ http --print Hh 'https://dspacetest.cgiar.org/rest/items?expand=metadata,parentCommunityList,parentCollectionList,bitstreams&limit=10&offset=60'
...
Content-Type: application/json
X-Cache-Status: MISS
$ http --print Hh 'https://dspacetest.cgiar.org/rest/items?expand=metadata,parentCommunityList,parentCollectionList,bitstreams&limit=10&offset=60'
...
Content-Type: application/json
X-Cache-Status: HIT
$ http --print Hh 'https://dspacetest.cgiar.org/rest/items?expand=metadata,parentCommunityList,parentCollectionList,bitstreams&limit=10&offset=60' Accept:application/xml
...
Content-Type: application/xml
X-Cache-Status: MISS
$ http --print Hh 'https://dspacetest.cgiar.org/rest/items?expand=metadata,parentCommunityList,parentCollectionList,bitstreams&limit=10&offset=60' Accept:application/xml
...
Content-Type: application/xml
X-Cache-Status: HIT
```

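The change itself is just appending the Accept header to nginx's default cache key so that JSON and XML responses for the same URI are cached separately (a sketch; the exact variables in our config may differ):

```nginx
# the default key is $scheme$proxy_host$request_uri
proxy_cache_key "$scheme$proxy_host$request_uri$http_accept";
```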
- This effectively makes our cache half as effective, but hopefully as more people start harvesting the number of requests handled by it will go up
- I will enable this on CGSpace and email Moises from CIP to check if their harvester is working

## 2022-09-26

- Update welcome text on CGSpace after our meeting last week
- I found another dozen or so ORCIDs for top authors on ILRI's community on CGSpace and tagged them (~1,100 more metadata fields)
- Last week we discussed moving `cg.identifier.googleurl` to `cg.identifier.url` since there is no need to treat Google Books URLs specially anymore as far as we know
- I made the changes to the submission form and the XMLUI item displays, then moved all existing metadata in PostgreSQL:

```console
dspace= ☘ UPDATE metadatavalue SET metadata_field_id=219 WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id=222;
UPDATE 1137
```

- Then I deleted `cg.identifier.googleurl` from the metadata registry
- Meeting with Salem, Svetlana, Valentina, and Abenet about MEL depositing to CGSpace for the initiatives
- Submitting to a collection without a workflow works as expected, and we can even select another collection (with a workflow) to map the item to from the MEL submission
- The three minor issues we found were:
- MEL still doesn't send the bitstream
- MEL sends metadata with a download URL on mel.cgiar.org
- MEL sends a JPEG that says "no thumbnail" when an item doesn't have a thumbnail
- I still need to send feedback to the group

## 2022-09-27

- Find a few more ORCID identifiers missing for ILRI authors and add them to the controlled vocabulary and tag the authors on CGSpace
- Moises from CIP says the WordPress importer worked fine with the current nginx proxy cache settings so it seems adding the HTTP Accept header to the cache key worked
- Update my DSpace 7 environments to 7.4-SNAPSHOT
- I see they have added thumbnails in some places now
- Oh nice, they also added the "recent submissions" to the home page
- While talking with Salem about MEL depositing to CGSpace we discovered an issue with HTTP DELETE on `/items/{item id}/bitstreams/{bitstream id}` or `/bitstreams/{bitstream id}`
  - DSpace removes the bitstream but keeps the empty `THUMBNAIL` bundle, which breaks the display in XMLUI
- Meeting with Enrico et al about PRMS reporting for the initiatives

## 2022-09-28

- I was reading the source code for DSpace 6's REST API and found that it's not possible to specify a bundle while POSTing a bitstream
- I asked Salem how they do it on MEL and he said they pretend to be a human and do it via XMLUI!
- I added a few new ILRI subjects to the input forms on CGSpace
- Both "bushmeat" and "wildlife conservation" are AGROVOC terms, but "wild meat" is not
- The distinction ILRI would like to start making is:
> Meat comes from any animal, and when at ILRI we specifically make reference to it in the context of livestock. However the word bushmeat refers to illegal harvesting of meat. Wild meat is being used as legal harvesting of meat from wildlife and not from livestock.
- I added a few more CGIAR authors ORCID identifiers to our controlled vocabulary and tagged them on CGSpace (~450 more metadata fields)
- Talking to Salem about ORCID identifiers, we compared lists and they have a bunch that we don't have:

```console
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-identifier.xml ~/Downloads/MEL_ORCID_2022-09-28.csv | \
    grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | \
    sort | \
    uniq > /tmp/2022-09-29-combined-orcids.txt
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-identifier.xml | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq | wc -l
1421
$ wc -l /tmp/2022-09-29-combined-orcids.txt
1905 /tmp/2022-09-29-combined-orcids.txt
```

- After combining them I ran them through my `resolve-orcids.py` script:

```console
$ ./ilri/resolve-orcids.py -i /tmp/2022-09-29-combined-orcids.txt -o /tmp/2022-09-29-combined-orcids-names.txt -d
```

- I wrote a script, `update-orcids.py`, to read a list of names and ORCID identifiers and update existing metadata in the database to the latest name format:

```console
$ ./ilri/update-orcids.py -i ~/src/git/cgspace-submission-guidelines/content/terms/cg-creator-identifier/cg-creator-identifier.txt -db dspace -u dspace -p 'fuuu' -m 247 -d
Connected to database.
Fixed 9 occurences of: ADEBOWALE AD AKANDE: 0000-0002-6521-3272
Fixed 43 occurences of: Alamu Emmanuel Oladeji (PhD, FIFST, MNIFST): 0000-0001-6263-1359
Fixed 3 occurences of: Alessandra Galie: 0000-0001-9868-7733
Fixed 1 occurences of: Amanda De Filippo: 0000-0002-1536-3221
...
```


## 2022-09-29

- I've been checking the size of the nginx proxy cache the last few days and it always seems to hover around 14,000 entries and 385MB:

```console
# find /var/cache/nginx/rest_cache/ -type f | wc -l
14202
# du -sh /var/cache/nginx/rest_cache
384M /var/cache/nginx/rest_cache
```

- Also on that note I'm trying to implement a workaround for a potential caching issue that causes MEL to not be able to update items on DSpace Test
- I think we might need to allow requests with a JSESSIONID to bypass the cache, but I have to verify with Salem
- We can do this with an nginx map:

```nginx
# Check if the JSESSIONID cookie is present and contains a 32-character hex
# value, which would mean that a user is actively attempting to re-use their
# Tomcat session. Then we set the $active_user_session variable and use it
# to bypass the nginx proxy cache in REST requests.
map $cookie_jsessionid $active_user_session {
    # requests with an empty key are not evaluated by limit_req
    # see: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
    default '';

    '~[A-Z0-9]{32}' 1;
}
```

- Then in the location block where we do the proxy cache:

```nginx
# Don't cache when user Shift-refreshes (Cache-Control: no-cache) or
# when a client has an active session (see the $cookie_jsessionid map).
proxy_cache_bypass $http_cache_control $active_user_session;
proxy_no_cache $http_cache_control $active_user_session;
```

- I found one client making 10,000 requests using a Windows 98 user agent: `Mozilla/4.0 (compatible; MSIE 5.00; Windows 98)`
- They all come from one IP address (129.227.149.43) in Hong Kong
- The IP belongs to a hosting provider called Zenlayer
- I will add this IP to the nginx bot networks and purge its hits

```console
$ ./ilri/check-spider-ip-hits.sh -f /tmp/ip -p
Purging 33027 hits from 129.227.149.43 in statistics

Total number of bot hits purged: 33027
```

- So it seems we've seen this bot before and the total number is much higher than the 10,000 this month
- I had a call with Salem and we verified that the nginx cache bypass for clients who provide a JSESSIONID fixes their issue with updating items/bitstreams from MEL
- The issue was that they delete all metadata and bitstreams, then add them again to make sure everything is up to date, and in that process they also re-request the item with all expands to get the bitstreams, which ends up getting cached and then they try to delete the old bitstream
- I also noticed that someone made a pull request to enable POSTing bitstreams to a particular bundle and it works, so that's awesome!

## 2022-09-30

- I applied the patch for POSTing bitstreams to other bundles on CGSpace
- Testing a few other DSpace 6.4 patches on DSpace Test:
- DS-3791 Make sure the "yearDifference" takes into account that a gap of 10 year contains 11 years
- DS-3873 Limit the usage of PDFBoxThumbnail to PDFs
- Reduce itemCounter init
- ImageMagick: Only execute "identify" on first page
- DS-3881: Show no total results on search-filter
- pass value instead of qualifier to method
- dspace-api: check for null AND empty qualifier in findByElement()
- Avoid exporting mapped Item more than once
- [DS-4574] v. 6 - Upgrade DBCP2 dependency
- bump up pdfbox version on 6.x to match main branch