Add notes for 2024-10-08
- Then I wrote a script to get them from OpenAlex
- After inspecting and cleaning a few dozen of them up in OpenRefine (removing "Keywords:" prefixes, copyright statements, HTML entities, etc) I managed to get about 440
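A minimal sketch of such a script (illustrative only, not the actual script): OpenAlex serves abstracts as an `abstract_inverted_index` that maps each word to its positions in the text, so the abstract has to be reassembled:

```python
import json
import urllib.request


def rebuild_abstract(inverted_index):
    """Reassemble abstract text from OpenAlex's abstract_inverted_index,
    which maps each word to the list of positions where it occurs."""
    positions = {p: word for word, plist in inverted_index.items() for p in plist}
    return " ".join(positions[i] for i in sorted(positions))


def fetch_abstract(doi):
    """Look up a work on OpenAlex by DOI and return its abstract, or None."""
    url = f"https://api.openalex.org/works/https://doi.org/{doi}"
    with urllib.request.urlopen(url) as response:
        work = json.load(response)
    index = work.get("abstract_inverted_index")
    return rebuild_abstract(index) if index else None
```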

## 2024-10-06

- Since I increased Solr's heap from 2 to 3G a few weeks ago it seems like Solr is always using 100% CPU
- I don't understand this because it was running well before, and I only increased it in anticipation of running the dspace-statistics-api-js, though I never got around to it
- I just realized that this may be related to the JMX monitoring, as I've seen gaps in the Grafana dashboards and remember that it took surprisingly long to scrape the metrics
- Maybe I need to change the scrape interval
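One way to check (a sketch, assuming the standard Prometheus-style `scrape_duration_seconds` metric is available in VictoriaMetrics) is to compare how long scrapes of that job have been taking against the 15s interval:

```
# Worst-case scrape time for the jvm_solr job over the last hour;
# values approaching or exceeding 15s would explain the gaps
max_over_time(scrape_duration_seconds{job="jvm_solr"}[1h])
```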

## 2024-10-08

- I checked the VictoriaMetrics vmagent dashboard and saw that there were thousands of errors scraping the `jvm_solr` target from Solr
- So it seems like I do need to change the scrape interval
- I will increase it from 15s (global) to 20s for that job
- Reading some documentation I found [this reference from Brian Brazil that discusses this very problem](https://www.robustperception.io/keep-it-simple-scrape_interval-id/)
- He recommends keeping a single scrape interval for all targets, but also checking the slow exporter (`jmx_exporter` in this case) and seeing if we can limit the data we scrape
- To keep things simple for now I will increase the global scrape interval to 20s
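In Prometheus-compatible scrape config terms (vmagent reads the same format), that's a one-line change in the global section — a sketch, assuming defaults elsewhere:

```yaml
global:
  # Raised from 15s so slow scrapes like jvm_solr have more headroom
  scrape_interval: 20s
```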
- Long term I should limit the metrics...
- Oh wow, I found out that [Solr ships with a Prometheus exporter!](https://solr.apache.org/guide/8_11/monitoring-solr-with-prometheus-and-grafana.html) and even includes a Grafana dashboard
- I'm trying to run the Solr prometheus-exporter as a one-off systemd unit to test it:
```console
# cd /opt/solr-8.11.3/contrib/prometheus-exporter
# systemd-run --uid=victoriametrics --gid=victoriametrics --working-directory=/opt/solr-8.11.3/contrib/prometheus-exporter ./bin/solr-exporter -p 9854 -b http://localhost:8983/solr -f ./conf/solr-exporter-config.xml -s 20
```
- The default scrape interval is 60 seconds, so if we scrape it more often than that the metrics will be stale
- From what I've seen this returns in less than one second so it should be safe to reduce the scrape interval
<!-- vim: set sw=2 ts=2: -->