---
title: "October, 2024"
date: 2024-10-03T11:01:00+03:00
author: "Alan Orth"
categories: ["Notes"]
---

## 2024-10-03

- I had an idea to get abstracts from OpenAlex
- For [copyright reasons they don't include plain abstracts](https://docs.openalex.org/api-entities/works/work-object#abstract_inverted_index), but the [pyalex](https://github.com/J535D165/pyalex) library can convert them on the fly

<!--more-->

- I filtered for English-language journal articles with DOIs that are licensed under Creative Commons (excluding ND) and are missing abstracts:

```console
$ csvcut -c 'id,dc.title[en_US],dcterms.abstract[en_US],cg.identifier.doi[en_US],dcterms.type[en_US],dcterms.language[en_US],dcterms.license[en_US]' ~/Downloads/2024-09-30-cgspace.csv | csvgrep -c 'dcterms.type[en_US]' -r '^Journal Article$' | csvgrep -c 'cg.identifier.doi[en_US]' -r '^.+$' | csvgrep -c 'dcterms.license[en_US]' -r '^CC-' | csvgrep -c 'dcterms.abstract[en_US]' -r '^$' | csvgrep -c 'dcterms.language[en_US]' -r '^en$' | grep -v "||" | grep -v -- '-ND' | grep -v -E 'https://doi.org/10.(2499|4160|17528)/' > /tmp/missing-abstracts.csv
```

- Then I wrote a script to fetch them from OpenAlex (a rough sketch is below)
- After inspecting and cleaning up a few dozen of them in OpenRefine (removing "Keywords:" sections, copyright statements, HTML entities, etc.) I managed to get about 440
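- A minimal sketch of what I mean, assuming pyalex is installed and reading the /tmp/missing-abstracts.csv exported above (the output file name and error handling are only illustrative):

```python
#!/usr/bin/env python3
# Look up each DOI from the filtered CSV in OpenAlex and write out the
# abstract, which pyalex reconstructs from the inverted index on the fly.
import csv

from pyalex import Works

with open("/tmp/missing-abstracts.csv", newline="") as f_in, open(
    "/tmp/abstracts.csv", "w", newline=""
) as f_out:
    writer = csv.writer(f_out)
    writer.writerow(["id", "dcterms.abstract[en_US]"])

    for row in csv.DictReader(f_in):
        doi = row["cg.identifier.doi[en_US]"].strip()
        if not doi:
            continue

        try:
            # pyalex accepts a DOI URL as the work identifier
            work = Works()[doi]
        except Exception:
            # not every DOI is in OpenAlex
            continue

        # pyalex converts the inverted index to a plain abstract on the fly
        abstract = work["abstract"] if work.get("abstract_inverted_index") else None
        if abstract:
            writer.writerow([row["id"], abstract])
```
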
## 2024-10-06

- Since I increased Solr's heap from 2G to 3G a few weeks ago, it seems like Solr is always using 100% CPU
- I don't understand this because it was running well before, and I only increased the heap in anticipation of running the dspace-statistics-api-js, though I never got around to that
- I just realized that this may be related to the JMX monitoring, as I've seen gaps in the Grafana dashboards and remember that it took surprisingly long to scrape the metrics
- Maybe I need to change the scrape interval

## 2024-10-08

- I checked the VictoriaMetrics vmagent dashboard and saw that there were thousands of errors scraping the `jvm_solr` target from Solr
- So it seems like I do need to change the scrape interval
- I will increase it from 15s (global) to 20s for that job
- Reading some documentation I found [this reference from Brian Brazil that discusses this very problem](https://www.robustperception.io/keep-it-simple-scrape_interval-id/)
- He recommends keeping a single scrape interval for all targets, but also checking the slow exporter (`jmx_exporter` in this case) and seeing if we can limit the data we scrape
- To keep things simple for now I will increase the global scrape interval to 20s
- Long term I should limit the metrics...
- Oh wow, I found out that [Solr ships with a Prometheus exporter!](https://solr.apache.org/guide/8_11/monitoring-solr-with-prometheus-and-grafana.html) and it even includes a Grafana dashboard
- I'm trying to run the Solr prometheus-exporter as a one-off systemd unit to test it:

```console
# cd /opt/solr-8.11.3/contrib/prometheus-exporter
# systemd-run --uid=victoriametrics --gid=victoriametrics --working-directory=/opt/solr-8.11.3/contrib/prometheus-exporter ./bin/solr-exporter -p 9854 -b http://localhost:8983/solr -f ./conf/solr-exporter-config.xml -s 20
```

- The exporter's default scrape interval is 60 seconds, so if we scrape it more often than that the metrics will be stale
- From what I've seen this returns in less than one second, so it should be safe to reduce the scrape interval

## 2024-10-19

- Heavy load on CGSpace today
- There is a noticeable increase just before 4PM local time
- I extracted a list of IPs:

```console
# grep -E '19/Oct/2024:1[567]' /var/log/nginx/api-access.log | awk '{print $1}' | sort -u > /tmp/ips.txt
```

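- For example, something like this annotates each IP in /tmp/ips.txt with its ASN and organization, assuming the geoip2 Python library and a local copy of MaxMind's free GeoLite2 ASN database (the database path here is a guess):

```python
#!/usr/bin/env python3
# Print each IP from /tmp/ips.txt with its ASN and organization, using a
# local GeoLite2 ASN database (the path is an assumption).
import geoip2.database
import geoip2.errors

with geoip2.database.Reader("/var/lib/GeoIP/GeoLite2-ASN.mmdb") as reader:
    with open("/tmp/ips.txt") as f:
        for line in f:
            ip = line.strip()
            if not ip:
                continue

            try:
                asn = reader.asn(ip)
            except geoip2.errors.AddressNotFoundError:
                print(f"{ip} # (not found)")
                continue

            print(
                f"{ip} # {asn.autonomous_system_number} "
                f"({asn.autonomous_system_organization})"
            )
```
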
- I looked them up and found some data center networks using normal user agents across hundreds of IPs, for example:
  - 154.47.29.168 # 212238 (CDNEXT - Datacamp Limited, GB)
  - 91.210.64.12 # 29802 (HVC-AS, US) - HIVELOCITY, Inc.
  - 103.221.57.120 # 132817 (DZCRD-AS-AP DZCRD Networks Ltd, BD)
  - 109.107.150.136 # 201341 (CENTURION-INTERNET-SERVICES - trafficforce, UAB, LT) - Code200
  - 185.210.207.1 # 209709 (CODE200-ISP1 - UAB code200, LT)
  - 185.162.119.101 # 207223 (GLOBALCON - Global Connections Network LLC, US)
  - 173.244.35.101 # 64286 (LOGICWEB, US) - Tesonet
  - 139.28.160.141 # 396319 (US-INTERNET-396319, US) - OxyLabs
  - 104.143.89.112 # 62874 (WEB2OBJECTS, US) - Web2Objects LLC
- I added some network blocks to the nginx conf
- Interestingly, I see so many IPs using the same user agent today:

```console
# grep "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.3" /var/log/nginx/api-access.log | awk '{print $1}' | sort -u | wc -l
767
```

- For reference, the current Chrome version is 129 or so...
- This is definitely worth looking into because it seems like one massive botnet
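- A rough way to see which other user agents are coming from many distinct IPs, assuming the default nginx combined log format where the user agent is the last double-quoted field (the top-ten cutoff is arbitrary):

```python
#!/usr/bin/env python3
# Rank user agents in the nginx API log by the number of distinct client
# IPs using them, to spot other possible botnet user agents.
from collections import defaultdict

ips_per_agent = defaultdict(set)

with open("/var/log/nginx/api-access.log") as f:
    for line in f:
        parts = line.split('"')
        if len(parts) < 7:
            continue

        ip = line.split()[0]
        user_agent = parts[5]
        ips_per_agent[user_agent].add(ip)

# Print the ten user agents seen from the most distinct IPs
ranked = sorted(ips_per_agent.items(), key=lambda kv: len(kv[1]), reverse=True)
for agent, ips in ranked[:10]:
    print(f"{len(ips)}\t{agent}")
```
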

<!-- vim: set sw=2 ts=2: -->