---
title: "May, 2022"
date: 2022-05-04T09:13:39+03:00
author: "Alan Orth"
---
## 2022-05-04
- I found a few more IPs making requests with the shady Chrome 44 user agent in the last few days, so I will add them to the block list too (sketched below):
  - 18.207.136.176
  - 185.189.36.248
  - 50.118.223.78
  - 52.70.76.123
  - 3.236.10.11
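For reference, the nginx side looks roughly like this (illustrative only; the real list is templated by our Ansible infrastructure playbooks, so the path here is made up):

```console
$ # illustrative path; Ansible manages the real blocklist
$ cat /etc/nginx/conf.d/blocklist.conf
deny 18.207.136.176;
deny 185.189.36.248;
deny 50.118.223.78;
deny 52.70.76.123;
deny 3.236.10.11;
```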
- Looking at the Solr statistics for 2022-04:
  - 52.191.137.59 is Microsoft, but they are using a normal user agent and making tens of thousands of requests
  - 64.39.98.62 is owned by Qualys, and all their requests are probing for /etc/passwd etc.
  - 185.192.69.15 is in the Netherlands and is using a normal user agent, but making excessive automated HTTP requests to paths forbidden in robots.txt
  - 157.55.39.159 is owned by Microsoft and identifies as bingbot, so I don't know why its requests were logged in Solr
  - 52.233.67.176 is owned by Microsoft and uses a normal user agent, but is making excessive automated HTTP requests
  - 157.55.39.144 is owned by Microsoft and uses a normal user agent, but is making excessive automated HTTP requests
  - 207.46.13.177 is owned by Microsoft and identifies as bingbot, so I don't know why its requests were logged in Solr
- If I query Solr for `time:2022-04* AND dns:*msnbot* AND dns:*.msn.com.` I see a handful of IPs that made 41,000 requests
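The same query can be run against the statistics core directly, faceting on IP to see who made the requests (a sketch, assuming Solr is listening on localhost:8983 with a core named statistics):

```console
$ curl -s 'http://localhost:8983/solr/statistics/select' \
    --data-urlencode 'q=time:2022-04* AND dns:*msnbot* AND dns:*.msn.com.' \
    --data-urlencode 'rows=0' \
    --data-urlencode 'facet=true' \
    --data-urlencode 'facet.field=ip'
```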
- I purged 93,974 hits from these IPs using my `check-spider-ip-hits.sh` script (invocation sketched below)
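A sketch of the invocation, assuming the script's `-f` flag takes a file of IPs (one per line) and `-p` purges the hits rather than just counting them:

```console
$ ./ilri/check-spider-ip-hits.sh -f /tmp/robot-ips.txt -p
```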
- Now looking at the Solr statistics by user agent I see:
```
SomeRandomText
RestSharp/106.11.7.0
MetaInspector/5.7.0 (+https://github.com/jaimeiniesta/metainspector)
wp_is_mobile
Mozilla/5.0 (compatible; um-LN/1.0; mailto: techinfo@ubermetrics-technologies.com; Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
insomnia/2022.2.1
ZoteroTranslationServer
omgili/0.5 +http://omgili.com
curb
Sprout Social (Link Attachment)
```
- I purged 2,900 hits from these user agents from Solr using my `check-spider-hits.sh` script
- I made a pull request to COUNTER-Robots for some of these agents
  - In the meantime I will add them to our local overrides in DSpace (sketched below)
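Roughly like this (a sketch; the local file name under `dspace/config/spiders/agents/` is an assumption, and each line is a pattern matched against the user agent):

```console
$ # sketch: append the new agent patterns to our local overrides
$ cat >> dspace/config/spiders/agents/ilri << 'EOF'
RestSharp
MetaInspector
ZoteroTranslationServer
EOF
```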
- Run all system updates on AReS server, update all Docker containers, and restart the server
- Start a harvest on AReS
## 2022-05-05
- Update PostgreSQL JDBC driver to 42.3.5 in the Ansible infrastructure playbooks and deploy on DSpace Test
- Peter asked me how many items we add to CGSpace every year
  - I wrote a SQL query to check the number of items grouped by their accession dates since 2009:
```console
localhost/dspacetest= ☘ SELECT EXTRACT(year from text_value::date) AS YYYY, COUNT(*) FROM metadatavalue WHERE metadata_field_id=11 GROUP BY YYYY ORDER BY YYYY DESC LIMIT 14;
 yyyy │ count
──────┼───────
 2022 │  2073
 2021 │  6471
 2020 │  4074
 2019 │  7330
 2018 │  8899
 2017 │  6860
 2016 │  8451
 2015 │ 15692
 2014 │ 16479
 2013 │  4388
 2012 │  6472
 2011 │  2694
 2010 │  2457
 2009 │   293
```
- Note that I had an issue with casting `text_value` to date because one item had an accession date of `2016` instead of `2016-09-29T20:14:47Z`
  - Once I fixed that, PostgreSQL was able to `extract()` the year
- Some other methods I tried also worked, for example `TO_DATE()`:
```console
localhost/dspacetest= ☘ SELECT EXTRACT(year from TO_DATE(text_value, 'YYYY-MM-DD"T"HH24:MI:SS"Z"')) AS YYYY, COUNT(*) FROM metadatavalue WHERE metadata_field_id=11 GROUP BY YYYY ORDER BY YYYY DESC LIMIT 14;
```
- But it seems PostgreSQL is smart enough to recognize the date formatting in these strings automatically when we cast, so we don't need `TO_DATE()` with an explicit format string
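A quick illustration of both behaviors in psql (output approximate):

```console
localhost/dspacetest= ☘ SELECT '2016-09-29T20:14:47Z'::date;
    date
────────────
 2016-09-29
(1 row)

localhost/dspacetest= ☘ SELECT '2016'::date;
ERROR:  invalid input syntax for type date: "2016"
```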
- Another thing I noticed is that a few hundred items have accession dates from decades ago; perhaps this is due to importing items from the CGIAR Library?
- I spent some time merging a few pull requests for DSpace 6.4 and porting one to `main` for DSpace 7.x
- I also submitted a pull request to migrate Mirage 2's build from bower and compass to yarn and node-sass
## 2022-05-07
- Start a harvest on AReS
## 2022-05-09
- Submit an issue to Atmire's bug tracker inquiring about DSpace 6.4 support
## 2022-05-10
- Submit an updated pull request to migrate Mirage 2's build from bower and compass to npm and node-sass
- This one is better than the previous one because it uses npm directly, which comes with the Node.js distribution, rather than requiring the user to install yarn
- I also updated a bunch of grunt build deps
## 2022-05-12
- CGSpace meeting with Abenet and Peter
  - We discussed the future of CGSpace and DSpace in general in the new One CGIAR
  - We discussed how to prepare for bringing in content from the Initiatives, and whether we need new metadata fields to support people from IFPRI etc.
  - We discussed the need for good quality Drupal and WordPress modules so sites can harvest content from the repository
  - Peter asked me to send him a list of investors/funders/donors so he can clean it up, and also to try to align it with ROR and eventually do something like we do with country codes: adding the ROR IDs and potentially showing the badge on item views
  - We also discussed removing some Mirage 2 themes for old programs and CRPs that don't have custom branding, i.e. only Google Analytics
- Export a list of donors for Peter to clean up:
```console
localhost/dspacetest= ☘ \COPY (SELECT DISTINCT text_value as "cg.contributor.donor", count(*) FROM metadatavalue WHERE dspace_object_id IN (SELECT uuid FROM item) AND metadata_field_id = 248 GROUP BY text_value ORDER BY count DESC) to /tmp/2022-05-12-donors.csv WITH CSV HEADER;
COPY 1184
```
- Then I created a CSV from our `cg-creator-identifier.xml` controlled vocabulary and ran it against our database with `add-orcid-identifiers-csv.py` to see if any author names happened to match ones that are missing ORCIDs in CGSpace
```console
$ ./ilri/add-orcid-identifiers-csv.py -i /tmp/2022-05-12-add-orcids.csv -db dspace -u dspace -p 'fuuu' | tee /tmp/orcid.log
$ grep -c "Adding ORCID" /tmp/orcid.log
85
```
- So it's only eighty-five, but better than nothing...
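The input CSV was scraped from the controlled vocabulary itself; a rough sketch of the extraction, assuming each vocabulary entry is a `<node>` element whose `label` attribute holds the "Name: ORCID" pair, which I then reshaped into the two-column format the script expects:

```console
$ grep -oE 'label="[^"]+"' dspace/config/controlled-vocabularies/cg-creator-identifier.xml \
    | sed -e 's/^label="//' -e 's/"$//' > /tmp/name-orcid-pairs.txt
```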
- I removed the custom Mirage 2 themes for some old projects:
  - AgriFood
  - AVCD
  - LIVES
  - FeedTheFuture
  - DrylandSystems
  - TechnicalConsortium
  - EADD
  - That should knock a few minutes off the Maven build time!
- I generated a report from the AReS nginx logs on linode18:
```console
# zcat --force /var/log/nginx/access.log.* | grep 'GET /explorer' | goaccess --log-format=COMBINED - -o /tmp/ares_report.html
```
## 2022-05-13
- Peter finalized the corrections on donors from yesterday so I extracted them into fix/delete CSVs and ran them on CGSpace:
```console
$ ./ilri/fix-metadata-values.py -i 2022-05-13-fix-CGSpace-Donors.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.donor -m 248 -t correct -d
$ ./ilri/delete-metadata-values.py -i 2022-05-13-delete-CGSpace-Donors.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.donor -m 248 -d
```
- I cleaned up a few records manually (like some that had `\r\n`), then re-exported the donors and checked them against the latest ROR dump:
```console
$ ./ilri/ror-lookup.py -i /tmp/2022-05-13-donors.csv -r v1.0-2022-03-17-ror-data.json -o /tmp/2022-05-13-ror.csv
$ csvgrep -c matched -m true /tmp/2022-05-13-ror.csv | wc -l
230
$ csvgrep -c matched -m false /tmp/2022-05-13-ror.csv | csvcut -c organization > /tmp/2022-05-13-ror-unmatched.csv
```
- Then I sent Peter a list so he can try to update some from ROR
- I did some work to upgrade the Mirage 2 build dependencies in our `6_x-prod` branch
  - I switched to Node.js 14 also
- Meeting with Margarita and Manuel from ABC to discuss uploading ~6,000 automatically generated CRP policy reports from MARLO to CGSpace
  - They will try to provide the records and PDFs by mid June because they are still finalizing the reports for 2021
  - MARLO will be going offline because it was built for the CRPs, which have now ended
  - We reviewed the metadata they have and gave them some advice on the formatting
  - Once we upload the records I will need to provide them with a mapping of the MARLO URLs to Handle URLs so they can set up redirects (a rough sketch is below)
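Something like this could produce the mapping after the upload (a sketch; I'm assuming the MARLO source URL will be stored in `cg.identifier.url`, and the collection handle is a placeholder):

```console
$ ./bin/dspace metadata-export -i 10568/xxxxx -f /tmp/marlo-items.csv
$ csvcut -c 'cg.identifier.url[en_US],dc.identifier.uri[en_US]' /tmp/marlo-items.csv > /tmp/marlo-to-handle.csv
```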
## 2022-05-14
- Start a full Discovery index
- Start an AReS harvest
## 2022-05-23
- Start an AReS harvest
## 2022-05-24
- Update CGSpace to latest `6_x-prod` branch, which removes a handful of Mirage 2 themes and migrates to Node.js 14 and some newer build deps
- Run all system updates on CGSpace (linode18) and reboot it
## 2022-05-25
- Maria Garruccio sent me a handful of new ORCID identifiers for Alliance staff
  - We currently have 1349 unique identifiers and this adds about forty-five new ones (!):
```console
$ grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-identifier.xml | sort | uniq | wc -l
1349
$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-identifier.xml /tmp/new-abc-orcids.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq > /tmp/2022-05-25-combined-orcids.txt
$ wc -l /tmp/2022-05-25-combined-orcids.txt
1395 /tmp/2022-05-25-combined-orcids.txt
```
- After combining and filtering them I resolved their names using my `resolve-orcids.py` script:

```console
$ ./ilri/resolve-orcids.py -i /tmp/2022-05-25-combined-orcids.txt -o /tmp/2022-05-25-combined-orcids-names.txt
```
- There are some names that changed, so I need to run them through the `fix-metadata-values.py` script:

```console
$ cat 2022-05-25-update-orcids.csv
cg.creator.identifier,correct
"Andrea Fongar: 0000-0003-2084-1571","ANDREA CECILIA SANCHEZ BOGADO: 0000-0003-4549-6970"
"Bekele Shiferaw: 0000-0002-3645-320X","Bekele A. Shiferaw: 0000-0002-3645-320X"
"Henry Kpaka: 0000-0002-7480-2933","Henry Musa Kpaka: 0000-0002-7480-2933"
"Josephine Agogbua: 0000-0001-6317-1227","Josephine Udunma Agogbua: 0000-0001-6317-1227"
"Martha Lilia Del Río Duque: 0000-0002-0879-0292","Martha Del Río: 0000-0002-0879-0292"
$ ./ilri/fix-metadata-values.py -i 2022-05-25-update-orcids.csv -db dspace -u dspace -p 'fuuu' -f cg.creator.identifier -m 247 -t correct -d -n
Connected to database.
Would fix 4 occurences of: Andrea Fongar: 0000-0003-2084-1571
Would fix 1 occurences of: Bekele Shiferaw: 0000-0002-3645-320X
Would fix 2 occurences of: Josephine Agogbua: 0000-0001-6317-1227
Would fix 34 occurences of: Martha Lilia Del Río Duque: 0000-0002-0879-0292
```
## 2022-05-26
- I extracted the names and ORCID identifiers from Maria's spreadsheet and produced several CSV files with different name formats:
  - First Last (GREL: `cells['First Name'].value + ' ' + cells['Surname'].value`)
  - Last, First (GREL: `cells['Surname'].value + ", " + cells['First Name'].value`)
  - Last, F. (GREL: `cells['Surname'].value + ", " + cells['First Name'].value.substring(0, 1) + "."`)
- Then I constructed a CSV for each of these variations to use with `add-orcid-identifiers-csv.py` (format sketched below)
- In total I matched a bunch of authors and added 872 new metadata fields!
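For reference, the CSVs pair an author name variant with the full identifier string; a minimal sketch (the header names are my assumption, with my own ORCID as the example value):

```console
$ cat /tmp/2022-05-26-add-orcids.csv
dc.contributor.author,cg.creator.identifier
"Orth, Alan","Alan S. Orth: 0000-0002-1735-7458"
$ ./ilri/add-orcid-identifiers-csv.py -i /tmp/2022-05-26-add-orcids.csv -db dspace -u dspace -p 'fuuu'
```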
## 2022-05-27
- Send a follow-up to Leroy from the Alliance to ask about the CIAT Library URLs
  - It seems that I forgot to attach the list of PDFs when I last communicated with him in 2022-03
- Meeting with Terry Bucknell from Overton.io
## 2022-05-28
- Start a harvest on AReS
## 2022-05-30
- Help IITA with some collection authorization issues on CGSpace
- Finally looking into Peter's Altmetric export from 2022-02
  - We want to try to compare some of the information about open access status with that in CGSpace
- I created a new column for all items that have CGSpace handles using this GREL:

```
"https://hdl.handle.net/" + value.match(/.*?(10568\/\d+).*?/)[0]
```
- With that I can do a join on the CGSpace metadata and perhaps clean up some items:

```console
$ ./bin/dspace metadata-export -f 2022-05-30-cgspace.csv
$ csvcut -c 'id,dc.identifier.uri[en_US],dcterms.accessRights[en_US],dcterms.license[en_US]' 2022-05-30-cgspace.csv | sed '1 s/dc\.identifier\.uri\[en_US\]/dc.identifier.uri/' > /tmp/cgspace.csv
$ csvjoin -c 'dc.identifier.uri' ~/Downloads/2022-05-30-Altmetric-Research-Outputs-CGSpace.csv /tmp/cgspace.csv > /tmp/cgspace-altmetric.csv
```
- Examining the data in OpenRefine, I spot-checked a few records where Altmetric and CGSpace disagree, and in most cases I found Altmetric to be wrong...