2022-06-06
- Look at the Solr statistics on CGSpace
- I see 167,000 hits from a bunch of Microsoft IPs with reverse DNS “msnbot-” using the Solr query `dns:*msnbot* AND dns:*.msn.com`
- I purged these first so I could see the other “real” IPs in the Solr facets
- I see 47,500 hits from 80.248.237.167 on a data center ISP in Sweden, using a normal user agent
- I see 13,000 hits from 163.237.216.11 on a data center ISP in Australia, using a normal user agent
- I see 7,300 hits from 208.185.238.57 from Britannica, using a normal user agent
- There seem to be many more of these:
```console
# zcat --force /var/log/nginx/access.log* | grep 208.185.238. | awk '{print $1}' | sort | uniq -c | sort -h
      2 208.185.238.1
    166 208.185.238.54
   1293 208.185.238.51
   2587 208.185.238.59
   4692 208.185.238.56
   5480 208.185.238.53
   6277 208.185.238.52
   6400 208.185.238.58
   8261 208.185.238.55
  17549 208.185.238.57
```
- I see 3,000 hits from 178.208.75.33, a Russian-owned IP in the Netherlands that is making a GET request to / every minute, using a normal user agent
- I see 3,000 hits from 134.122.124.196 on Digital Ocean to the REST API with a normal user agent
- I purged all the hits from these IPs, about 265,000 in total
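- For reference, a minimal Python sketch of how such purges can be done with a Solr delete-by-query against the statistics core (the core URL here is an assumption for a local DSpace instance):
```python
# Hypothetical sketch: purge statistics hits matching a Solr query using the
# update handler's delete-by-query. The core URL is an assumption.
import requests

SOLR_STATS = "http://localhost:8081/solr/statistics"

def purge(query: str) -> None:
    # Solr deletes every document matching the query in the XML delete message
    xml = f"<delete><query>{query}</query></delete>"
    response = requests.post(
        f"{SOLR_STATS}/update",
        params={"commit": "true"},
        headers={"Content-Type": "text/xml"},
        data=xml,
    )
    response.raise_for_status()

# The msnbot hits by reverse DNS, then the data center and bot-like IPs
purge("dns:*msnbot* AND dns:*.msn.com")
for ip in ["80.248.237.167", "163.237.216.11", "208.185.238.57", "178.208.75.33", "134.122.124.196"]:
    purge(f"ip:{ip}")
```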
- Then I faceted by user agent and found:
- 1,000 hits by `insomnia/2022.2.1`, which I also saw last month and submitted to COUNTER-Robots
- 265 hits by `omgili/0.5 +http://omgili.com`
- 150 hits by `Vizzit`
- 132 hits by `MetaInspector/5.7.0 (+https://github.com/jaimeiniesta/metainspector)`
- 73 hits by `Scoop.it`
- 62 hits by `bitdiscovery`
- 59 hits by `Asana/1.4.0 WebsiteMetadataRetriever`
- 32 hits by `Sprout Social (Link Attachment)`
- 29 hits by `CyotekWebCopy/1.9 CyotekHTTP/6.2`
- 20 hits by `Hootsuite-Authoring/1.0`
- I purged about 4,100 hits from these user agents
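- The user agent counts above come from faceting the statistics core; a rough Python sketch of that query (the core URL and the `userAgent` field name are assumptions):
```python
# Hypothetical sketch: facet the Solr statistics core by user agent to see the
# top hitters. The core URL and userAgent field name are assumptions.
import requests

SOLR_STATS = "http://localhost:8081/solr/statistics"

params = {
    "q": "*:*",              # in practice this would be restricted, e.g. by time
    "rows": 0,               # we only care about the facet counts
    "facet": "true",
    "facet.field": "userAgent",
    "facet.limit": 20,
    "facet.mincount": 1,
    "wt": "json",
}
response = requests.get(f"{SOLR_STATS}/select", params=params)
facets = response.json()["facet_counts"]["facet_fields"]["userAgent"]

# Solr returns a flat list alternating between value and count
for agent, count in zip(facets[::2], facets[1::2]):
    print(f"{count}\t{agent}")
```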
- Run all system updates on AReS server (linode20) and reboot
- I want to try to update some of the build dependencies of OpenRXV since Node.js 12 is no longer supported
- Upgrade linode20 to Ubuntu 22.04 and start an AReS harvest
- I merged the Mirage 2 build fix to `dspace-6_x` for DSpace 6.4
2022-06-07
- I tested Node.js 14 one more time with vanilla DSpace 6.4-SNAPSHOT and with the CGSpace source and it worked well
- I made a pull request to DSpace to use Node.js 14 for Mirage 2
- I even tested Node.js 16 and it works, but that is enough for now…
2022-06-08
- Work on AReS a bit since I wasn’t able to harvest after doing the updates on the server and in the containers a few days ago
- I don’t really know what the problem was, but on the server I had to enable IPv4 forwarding so the frontend container would build
- Once I downed and upped AReS with docker-compose I was able to start a new harvest
- I also did some tests to enable the ES2020 target in the backend because we’re on Node.js 14 there now
2022-06-13
- Create a user for Mohammed Salem to test MEL submission on DSpace Test:
```console
$ dspace user -a -m mel-submit@cgiar.org -g MEL -s Submit -p 'owwwwwwww'
```
- According to my notes from 2020-10 the account must be in the admin group in order to submit via the REST API
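- A rough sketch of what a REST API submission test could look like in Python; the endpoint paths and payload shape are assumptions from memory and should be checked against the DSpace 6 REST API documentation:
```python
# Hypothetical sketch of a DSpace 6 REST API submission with the new account.
# Endpoints and payload shape are assumptions; verify against the REST API docs.
import requests

BASE = "https://dspacetest.cgiar.org/rest"
session = requests.Session()

# DSpace 6 REST login sets a session cookie for subsequent requests
response = session.post(
    f"{BASE}/login",
    json={"email": "mel-submit@cgiar.org", "password": "owwwwwwww"},
)
response.raise_for_status()

# Create a minimal test item in a collection (the UUID is a placeholder)
item = {"metadata": [{"key": "dc.title", "value": "MEL submission test", "language": "en_US"}]}
response = session.post(f"{BASE}/collections/<collection-uuid>/items", json=item)
print(response.status_code, response.text)
```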
2022-06-14
2022-06-16
- Francesca asked us to add the CC-BY-3.0-IGO license to the submission form on CGSpace
- I remember I had requested SPDX to add CC-BY-NC-ND-3.0-IGO in 2019-02, and they finally merged it in 2020-07, but I never added it to CGSpace
- I will add the full suite of CC 3.0 IGO licenses to CGSpace and then make a request to SPDX for the others:
- CC-BY-3.0-IGO
- CC-BY-SA-3.0-IGO
- CC-BY-ND-3.0-IGO
- CC-BY-NC-3.0-IGO
- CC-BY-NC-SA-3.0-IGO
- CC-BY-NC-ND-3.0-IGO
- I filed an issue asking for SPDX to add CC-BY-3.0-IGO
- Meeting with Moayad from CodeObia to discuss OpenRXV
- He added the ability to use multiple indexes / dashboards, and to be able to embed them in iframes
- Add `cg.contributor.initiative` with a controlled vocabulary based on CLARISA’s list to the CGSpace submission form
- Switch to the `linux-virtual-hwe-20.04` kernel on CGSpace (linode18), run all system updates, and reboot
2022-06-17
- I noticed that a few ORCID identifiers were missing for some scientists, so I added them to the controlled vocabulary and then tagged the scientists’ existing items on CGSpace:
```console
$ cat 2022-06-17-add-orcids.csv
dc.contributor.author,cg.creator.identifier
"Tijjani, A.","Abdulfatai Tijjani: 0000-0002-0793-9059"
"Tijjani, Abdulfatai","Abdulfatai Tijjani: 0000-0002-0793-9059"
"Mrode, Raphael A.","Raphael Mrode: 0000-0003-1964-5653"
"Okeyo Mwai, Ally","Ally Okeyo Mwai: 0000-0003-2379-7801"
"Ojango, Julie M.K.","Ojango J.M.K.: 0000-0003-0224-5370"
"Prendergast, J.G.D.","James Prendergast: 0000-0001-8916-018X"
"Ekine-Dzivenu, Chinyere","Chinyere Ekine-Dzivenu: 0000-0002-8526-435X"
"Ekine, C.","Chinyere Ekine-Dzivenu: 0000-0002-8526-435X"
"Ekine-Dzivenu, C.C","Chinyere Ekine-Dzivenu: 0000-0002-8526-435X"
"Shilomboleni, Helena","Helena Shilomboleni: 0000-0002-9875-6484"
```
```console
$ ./ilri/add-orcid-identifiers-csv.py -i /tmp/2022-06-17-add-orcids.csv -db dspace -u dspace -p 'fuuu' | tee /tmp/orcids.log
$ grep -c 'Adding ORCID' /tmp/orcids.log
304
```
- Also make some changes to the Discovery facets and item view
- I reduced the number of items to show for CRP facets from 20 to 5
- I added a facet for the Initiatives
- I re-organized a few parts of the item view to add Action Areas and the list of author affiliations
2022-06-18
- I deployed the changes on CGSpace and started a full Discovery re-index for the new Initiatives facet
- Run `dspace cleanup -v` on CGSpace
2022-06-20
- Add missing ORCID identifiers for ILRI staff to CGSpace and tag their items
2022-06-21
- Work on OpenRXV backend dependencies
- Update Elasticsearch, TypeScript, and eslint
- Sit in on webinar about contributing terms to AGROVOC
- I agreed that, by the end of June, I would send Sara Jani from ICARDA a list of new terms we have that don’t match AGROVOC
- I need to indicate which center is using them so we can have an appropriate expert review the terms
2022-06-22
- I re-deployed AReS with the latest OpenRXV changes then started a fresh harvest
- Meeting with Salem to discuss metadata between CGSpace and MEL
- We started working through his spreadsheet and then the Internet dropped
2022-06-23
- Start looking at country names between MEL, CGSpace, and standards like UN M.49 and GeoNames
- I used `xmllint` to extract the countries from CGSpace’s input forms:
```console
$ xmllint --xpath '//value-pairs[@value-pairs-name="countrylist"]/pair/stored-value/node()' dspace/config/input-forms.xml > /tmp/cgspace-countries.txt
```
- Then I wrote a Python script (`countries-to-csv.py`) to read them and save their names alongside the ISO 3166-1 Alpha-2 code
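- As a rough approximation of what the script does (not the script itself), the name-to-code mapping could be sketched with pycountry:
```python
# Rough approximation of mapping country names to ISO 3166-1 alpha-2 codes;
# this is not countries-to-csv.py itself, just a sketch of the idea.
import csv
import pycountry

with open("/tmp/cgspace-countries.txt") as f_in, open("/tmp/cgspace-countries.csv", "w") as f_out:
    writer = csv.writer(f_out)
    writer.writerow(["alpha2", "cgspace name"])
    for line in f_in:
        name = line.strip()
        if not name:
            continue
        try:
            # lookup() matches on name, official name, or common name
            writer.writerow([pycountry.countries.lookup(name).alpha_2, name])
        except LookupError:
            # some CGSpace spellings will need manual mapping
            writer.writerow(["", name])
```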
- Then I joined them with the other lists:
```console
$ csvjoin --outer -c alpha2 ~/Downloads/clarisa-countries.csv ~/Downloads/UNSD\ —\ Methodology.csv ~/Downloads/geonames-countries.csv /tmp/cgspace-countries.csv /tmp/mel-countries.csv > /tmp/countries.csv
```
- This mostly worked fine, and is much easier than writing another Python script with Pandas…
2022-06-24
- Spent some more time working on my `countries-to-csv.py` script to fix some logic errors
- Then I re-exported the UN M.49 countries to a clean list because the one I made yesterday somehow had errors:
```console
$ csvcut -d ';' -c 'ISO-alpha2 Code,Country or Area' ~/Downloads/UNSD\ —\ Methodology.csv | sed -e '1s/ISO-alpha2 Code/alpha2/' -e '1s/Country or Area/UN M.49 Name/' > ~/Downloads/un-countries.csv
```
- Check the number of lines in each file:
```console
$ wc -l clarisa-countries.csv un-countries.csv cgspace-countries.csv mel-countries.csv
  250 clarisa-countries.csv
  250 un-countries.csv
  198 cgspace-countries.csv
  258 mel-countries.csv
```
- I am seeing strange results with csvjoin’s `--outer` join, which I need in order to keep unmatched terms from both the left and right files…
- Using `xsv join --full` is giving me better results:
```console
$ xsv join --full alpha2 ~/Downloads/clarisa-countries.csv alpha2 ~/Downloads/un-countries.csv | xsv select '!alpha2[1]' > /tmp/clarisa-un-xsv-full.csv
```
- Then adding the CGSpace and MEL countries:
```console
$ xsv join --full alpha2 /tmp/clarisa-un-xsv-full.csv alpha2 /tmp/cgspace-countries.csv | xsv select '!alpha2[1]' > /tmp/clarisa-un-cgspace-xsv-full.csv
$ xsv join --full alpha2 /tmp/clarisa-un-cgspace-xsv-full.csv alpha2 /tmp/mel-countries.csv | xsv select '!alpha2[1]' > /tmp/clarisa-un-cgspace-mel-xsv-full.csv
```
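- For reference, the equivalent full outer join could also be sketched in pandas (which I had wanted to avoid), joining on the same `alpha2` column:
```python
# Sketch of the same full outer join in pandas, as an alternative to xsv;
# file paths are those used above.
import pandas as pd

clarisa = pd.read_csv("~/Downloads/clarisa-countries.csv")
un = pd.read_csv("~/Downloads/un-countries.csv")
cgspace = pd.read_csv("/tmp/cgspace-countries.csv")
mel = pd.read_csv("/tmp/mel-countries.csv")

countries = clarisa
for other in (un, cgspace, mel):
    # how="outer" keeps unmatched rows from both sides, like xsv join --full
    countries = countries.merge(other, on="alpha2", how="outer")

countries.to_csv("/tmp/countries-pandas.csv", index=False)
```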
2022-06-26
2022-06-28
- Start working on the CGSpace subject export for FAO
- First I exported a list of all metadata in our `dcterms.subject` and other center-specific subject fields with their counts:
```console
localhost/dspacetest= ☘ \COPY (SELECT DISTINCT text_value AS "subject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (187, 120, 210, 122, 215, 127, 208, 124, 128, 123, 125, 135, 203, 236, 238, 119) GROUP BY "subject" ORDER BY count DESC) to /tmp/2022-06-28-cgspace-subjects.csv WITH CSV HEADER;
COPY 27010
```
- Then I extracted the subjects and looked them up against AGROVOC:
```console
$ csvcut -c subject /tmp/2022-06-28-cgspace-subjects.csv | sed '1d' > /tmp/2022-06-28-cgspace-subjects.txt
$ ./ilri/agrovoc-lookup.py -i /tmp/2022-06-28-cgspace-subjects.txt -o /tmp/2022-06-28-cgspace-subjects-results.csv
```
- I keep getting timeouts after every five or ten requests, so this will not be feasible for 27,000 subjects!
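- One workaround might be to retry the lookups with a backoff instead of failing on the first timeout; a sketch using requests and urllib3 (the AGROVOC endpoint URL here is an assumption):
```python
# Sketch: retry AGROVOC lookups with exponential backoff instead of giving up
# after a timeout. The endpoint URL is an assumption.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=2, status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

def in_agrovoc(term: str) -> bool:
    # Skosmos-style search endpoint (assumed); non-empty results mean a match
    response = session.get(
        "https://agrovoc.fao.org/browse/rest/v1/search",
        params={"query": term, "lang": "en"},
        timeout=30,
    )
    response.raise_for_status()
    return len(response.json()["results"]) > 0

print(in_agrovoc("maize"))
```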
- I think I will have to write some custom script to use the AGROVOC RDF file
- Using rdflib to open the 1.2GB
agrovoc_lod.rdf
file takes several minutes and doesn’t seem very efficient
- I tried using lightrdf and it’s much quicker, but the documentation is limited and I’m not sure how to search yet
- I had to try in different Python versions because 3.10.x is apparently too new
- For future reference I was able to search with lightrdf:
```python
import lightrdf

parser = lightrdf.Parser()
# prints millions of lines
for triple in parser.parse("./agrovoc_lod.rdf", base_iri=None):
    print(triple)

agrovoc = lightrdf.RDFDocument('agrovoc_lod.rdf')

# all results for prefix http://aims.fao.org/aos/agrovoc/c_5
for triple in agrovoc.search_triples('http://aims.fao.org/aos/agrovoc/c_5', None, None):
    print(triple)
# ('http://aims.fao.org/aos/agrovoc/c_5', 'http://www.w3.org/2004/02/skos/core#altLabel', '"Abalone"@de')
# ('http://aims.fao.org/aos/agrovoc/c_5', 'http://www.w3.org/2004/02/skos/core#prefLabel', '"abalones"@en')

# all stuff for abalones in English
for triple in agrovoc.search_triples(None, None, '"abalones"@en'):
    print(triple)
```
- I ran the `agrovoc-lookup.py` script from a Linode server and it completed without issues… hmmm
2022-06-29
- Continue working on the list of non-AGROVOC subjects to report to FAO
- I got a one-liner to get the list of non-AGROVOC subjects and join them with their counts (updated to use a regex in csvgrep):
```console
$ csvgrep -c 'number of matches' -r '^0$' /tmp/2022-06-28-cgspace-subjects-results.csv \
  | csvcut -c subject \
  | csvjoin -c subject /tmp/2022-06-28-cgspace-subjects.csv - \
  > /tmp/2022-06-28-cgspace-non-agrovoc.csv
```
2022-06-30
- Check some AfricaRice records for potential duplicates on CGSpace for Abenet:
```console
$ csvcut -l -c dc.title,dcterms.issued,dcterms.type ~/Downloads/Africarice_2ndBatch_ay.csv | sed '1s/line_number/id/' > /tmp/africarice.csv
$ csv-metadata-quality -i /tmp/africarice.csv -o /tmp/africarice-cleaned.csv -u
$ ./ilri/check-duplicates.py -i /tmp/africarice-cleaned.csv -u dspacetest -db dspacetest -p 'dom@in34sniper' -o /tmp/africarice-duplicates.csv
```
- Looking at the non-AGROVOC subjects again, I see some in our list that are duplicated in uppercase and lowercase, so I will run it again with all lowercase:
```console
localhost/dspacetest= ☘ \COPY (SELECT DISTINCT(lower(text_value)) AS "subject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (187, 120, 210, 122, 215, 127, 208, 124, 128, 123, 125, 135, 203, 236, 238, 119) GROUP BY "subject" ORDER BY count DESC) to /tmp/2022-06-30-cgspace-subjects.csv WITH CSV HEADER;
```
- Also, I see there might be something wrong with my csvjoin because nigeria shows up in the final list as having not matched…
- Ah, I was using `csvgrep -m 0` to find rows that didn’t match, but that also matched items that had 10, 100, 50, etc…
- We need to use a regex:
```console
$ csvgrep -c 'number of matches' -r '^0$' /tmp/2022-06-30-cgspace-subjects-results.csv \
  | csvcut -c subject \
  | csvjoin -c subject /tmp/2022-06-30-cgspace-subjects.csv - \
  > /tmp/2022-06-30-cgspace-non-agrovoc.csv
```
- Then I took all the terms with fifty or more occurrences and put them on a Google Sheet
- There I started removing any term that was a variation of an existing AGROVOC term (like cowpea/cowpeas, policy/policies) or a compound concept
- pnbecker on DSpace Slack mentioned that they made a JSPUI deduplication step that is open source: https://github.com/the-library-code/deduplication
- It uses Levenshtein distance via PostgreSQL’s fuzzystrmatch extension
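- For reference, a minimal pure-Python version of the Levenshtein (edit) distance that fuzzystrmatch's `levenshtein()` computes, just to illustrate the metric behind the matching:
```python
# Minimal illustration of the Levenshtein (edit) distance metric, the same one
# PostgreSQL's fuzzystrmatch levenshtein() function computes.
def levenshtein(a: str, b: str) -> int:
    # previous holds distances from the current prefix of a to all prefixes of b
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insert_cost = current[j - 1] + 1
            delete_cost = previous[j] + 1
            substitute_cost = previous[j - 1] + (ca != cb)
            current.append(min(insert_cost, delete_cost, substitute_cost))
        previous = current
    return previous[-1]

# Two near-duplicate titles differ by a small edit distance
print(levenshtein("Climate smart agriculture", "Climate-smart agriculture"))  # 1
```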