mirror of https://github.com/alanorth/cgspace-notes.git (synced 2025-01-27 05:49:12 +01:00)
Add notes for 2021-09-13
@@ -36,7 +36,7 @@ I simply started it and AReS was running again:
 "/>
-<meta name="generator" content="Hugo 0.87.0" />
+<meta name="generator" content="Hugo 0.88.1" />
@@ -132,7 +132,7 @@ I simply started it and AReS was running again:
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ docker-compose -f docker/docker-compose.yml start angular_nginx
+<pre tabindex="0"><code class="language-console" data-lang="console">$ docker-compose -f docker/docker-compose.yml start angular_nginx
 </code></pre><ul>
 <li>Margarita from CCAFS emailed me to say that workflow alerts haven’t been working lately
 <ul>
@@ -152,7 +152,7 @@ I simply started it and AReS was running again:
 </ul>
 </li>
 </ul>
-<pre><code>https://cgspace.cgiar.org/open-search/discover?query=subject:water%20scarcity&scope=10568/16814&order=DESC&rpp=100&sort_by=2&start=1
+<pre tabindex="0"><code>https://cgspace.cgiar.org/open-search/discover?query=subject:water%20scarcity&scope=10568/16814&order=DESC&rpp=100&sort_by=2&start=1
 </code></pre><ul>
 <li>That will sort by date issued (see: <code>webui.itemlist.sort-option.2</code> in dspace.cfg), give 100 results per page, and start on item 1</li>
 <li>Otherwise, another alternative would be to use the IWMI CSV that we are already exporting every week</li>
@@ -162,7 +162,7 @@ I simply started it and AReS was running again:
 <ul>
 <li>The Elasticsearch indexes are messed up so I dumped and re-created them correctly:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">curl -XDELETE 'http://localhost:9200/openrxv-items-final'
+<pre tabindex="0"><code class="language-console" data-lang="console">curl -XDELETE 'http://localhost:9200/openrxv-items-final'
 curl -XDELETE 'http://localhost:9200/openrxv-items-temp'
 curl -XPUT 'http://localhost:9200/openrxv-items-final'
 curl -XPUT 'http://localhost:9200/openrxv-items-temp'
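The next hunk's header references restoring a JSON dump into Elasticsearch with elasticdump. For context, a dump-and-restore round trip looks roughly like this (a sketch only: index names follow the curl commands above, while the dump file path and the `--limit` batch size are assumptions):

```shell
# Dump document data out of the temp index to a JSON file,
# then load it into the freshly re-created final index.
# --type=data selects documents (vs. mappings); --limit batches per request.
elasticdump --input=http://localhost:9200/openrxv-items-temp \
            --output=/home/aorth/openrxv-items_data.json --type=data --limit=1000
elasticdump --input=/home/aorth/openrxv-items_data.json \
            --output=http://localhost:9200/openrxv-items-final --type=data --limit=1000
```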
@@ -208,7 +208,7 @@ elasticdump --input=/home/aorth/openrxv-items_data.json --output=http://localhos
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ podman unshare chown 1000:1000 /home/aorth/.local/share/containers/storage/volumes/docker_esData_7/_data
+<pre tabindex="0"><code class="language-console" data-lang="console">$ podman unshare chown 1000:1000 /home/aorth/.local/share/containers/storage/volumes/docker_esData_7/_data
 </code></pre><ul>
 <li>The new OpenRXV harvesting method by Moayad uses pages of 10 items instead of 100 and it’s much faster
 <ul>
@@ -231,7 +231,7 @@ elasticdump --input=/home/aorth/openrxv-items_data.json --output=http://localhos
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data.json | awk -F: '{print $2}' | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data.json | awk -F: '{print $2}' | wc -l
 90459
 $ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data.json | awk -F: '{print $2}' | sort | uniq | wc -l
 90380
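The two counts in that hunk differ by 79, meaning some handles repeat in the dump. `uniq -d` prints only the repeated lines; here is the same pipeline demonstrated on a tiny inline sample (run it against openrxv-items_data.json to list the actual duplicates):

```shell
# uniq -d keeps only lines that appear more than once in sorted input.
printf '"handle":"10568/1"\n"handle":"10568/2"\n"handle":"10568/1"\n' \
  | grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' \
  | awk -F: '{print $2}' | sort | uniq -d
# → "10568/1"
```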
@@ -255,11 +255,11 @@ $ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-it
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ dspace metadata-export -i 10568/16814 -f /tmp/2021-06-20-IWMI.csv
+<pre tabindex="0"><code class="language-console" data-lang="console">$ dspace metadata-export -i 10568/16814 -f /tmp/2021-06-20-IWMI.csv
 </code></pre><ul>
 <li>Then I used <code>csvcut</code> to extract just the columns I needed and do the replacement into a new CSV:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ csvcut -c 'id,dcterms.subject[],dcterms.subject[en_US]' /tmp/2021-06-20-IWMI.csv | sed 's/farmer managed irrigation systems/farmer-led irrigation/' > /tmp/2021-06-20-IWMI-new-subjects.csv
+<pre tabindex="0"><code class="language-console" data-lang="console">$ csvcut -c 'id,dcterms.subject[],dcterms.subject[en_US]' /tmp/2021-06-20-IWMI.csv | sed 's/farmer managed irrigation systems/farmer-led irrigation/' > /tmp/2021-06-20-IWMI-new-subjects.csv
 </code></pre><ul>
 <li>Then I uploaded the resulting CSV to CGSpace, updating 161 items</li>
 <li>Start a harvest on AReS</li>
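The "uploaded the resulting CSV" step in that hunk likely corresponds to DSpace's batch metadata editing CLI (a sketch, not necessarily how the author applied it; the upload can equally be done through the web UI, and on some DSpace versions the importing eperson must be passed with `-e`):

```shell
# Apply the edited CSV from the command line with DSpace's metadata-import tool.
dspace metadata-import -f /tmp/2021-06-20-IWMI-new-subjects.csv
```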
@@ -278,7 +278,7 @@ $ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-it
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | wc -l
 90937
 $ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | sort -u | wc -l
 85709
@@ -289,7 +289,7 @@ $ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | sort | uniq -c | sort -h
+<pre tabindex="0"><code class="language-console" data-lang="console">$ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep -oE '"handle":"[[:digit:]]+/[[:alnum:]]+"' | sort | uniq -c | sort -h
 </code></pre><ul>
 <li>Unfortunately I found no pattern:
 <ul>
@@ -312,7 +312,7 @@ $ grep -E '"repo":"CGSpace"' openrxv-items_data.json | grep
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq length
+<pre tabindex="0"><code class="language-console" data-lang="console">$ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq length
 5
 $ curl -s -H "Accept: application/json" "https://demo.dspace.org/rest/items?offset=0&limit=5" | jq '.[].handle'
 "10673/4"
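The offset/limit probes in that hunk can be turned into a small loop that pages through the endpoint and tallies handle frequencies, to spot items returned on more than one page (a sketch under assumptions: same legacy REST endpoint, and the page size and offsets are illustrative):

```shell
# Collect handles across several pages and count how often each appears;
# any count above 1 means pagination returned that item more than once.
for offset in 0 100 200; do
  curl -s -H "Accept: application/json" \
    "https://demo.dspace.org/rest/items?offset=$offset&limit=100" \
    | jq -r '.[].handle'
done | sort | uniq -c | sort -h
```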
@@ -355,7 +355,7 @@ $ curl -s -H "Accept: application/json" "https://demo.dspace.org/
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data-local-ds-4065.json | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data-local-ds-4065.json | wc -l
 90327
 $ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-items_data-local-ds-4065.json | sort -u | wc -l
 90317
@@ -368,7 +368,7 @@ $ grep -oE '"handle":"[[:digit:]]+/[[:digit:]]+"' openrxv-it
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ ./ilri/check-spider-hits.sh -f dspace/config/spiders/agents/ilri -p
+<pre tabindex="0"><code class="language-console" data-lang="console">$ ./ilri/check-spider-hits.sh -f dspace/config/spiders/agents/ilri -p
 Purging 1339 hits from RI\/1\.0 in statistics
 Purging 447 hits from crusty in statistics
 Purging 3736 hits from newspaper in statistics
@@ -397,7 +397,7 @@ Total number of bot hits purged: 5522
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console"># journalctl --since=today -u tomcat7 | grep -c 'Connection has been abandoned'
+<pre tabindex="0"><code class="language-console" data-lang="console"># journalctl --since=today -u tomcat7 | grep -c 'Connection has been abandoned'
 978
 $ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
 10100
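When the connection pool fills up like this, it helps to see who is holding the connections. The stock `pg_stat_activity` view has `application_name` and `state` columns that support a quick breakdown (a sketch; it assumes `psql` can reach the same database as the lock query above):

```shell
# Count current PostgreSQL connections per application and state.
psql -c "SELECT application_name, state, count(*) FROM pg_stat_activity GROUP BY 1, 2 ORDER BY 3 DESC;"
```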
@@ -412,16 +412,16 @@ $ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid =
 </li>
 <li>After upgrading and restarting Tomcat the database connections and locks were back down to normal levels:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;' | wc -l
 63
 </code></pre><ul>
 <li>Looking in the DSpace log, the first “pool empty” message I saw this morning was at 4AM:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">2021-06-23 04:01:14,596 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ [http-bio-127.0.0.1-8443-exec-4323] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
+<pre tabindex="0"><code class="language-console" data-lang="console">2021-06-23 04:01:14,596 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ [http-bio-127.0.0.1-8443-exec-4323] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
 </code></pre><ul>
 <li>Oh, and I notice 8,000 hits from a Flipboard bot using this user-agent:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0 (FlipboardProxy/1.2; +http://flipboard.com/browserproxy)
+<pre tabindex="0"><code class="language-console" data-lang="console">Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0 (FlipboardProxy/1.2; +http://flipboard.com/browserproxy)
 </code></pre><ul>
 <li>We can purge them, as this is not user traffic: <a href="https://about.flipboard.com/browserproxy/">https://about.flipboard.com/browserproxy/</a>
 <ul>
@@ -448,7 +448,7 @@ $ psql -c 'SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid =
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspace-openrxv-items-temp-backup.json | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspace-openrxv-items-temp-backup.json | wc -l
 104797
 $ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspace-openrxv-items-temp-backup.json | sort | uniq | wc -l
 99186
@@ -456,7 +456,7 @@ $ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspa
 <li>This number is probably unique for that particular harvest, but I don’t think it represents the true number of items…</li>
 <li>The harvest of DSpace Test I did on my local test instance yesterday has about 91,000 items:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ grep -E '"repo":"DSpace Test"' 2021-06-23-openrxv-items-final-local.json | grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' | sort | uniq | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ grep -E '"repo":"DSpace Test"' 2021-06-23-openrxv-items-final-local.json | grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' | sort | uniq | wc -l
 90990
 </code></pre><ul>
 <li>So the harvest on the live site is missing items, then why didn’t the add missing items plugin find them?!
@@ -469,7 +469,7 @@ $ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspa
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">172.104.229.92 - - [24/Jun/2021:07:52:58 +0200] "GET /sitemap HTTP/1.1" 503 190 "-" "OpenRXV harvesting bot; https://github.com/ilri/OpenRXV"
+<pre tabindex="0"><code class="language-console" data-lang="console">172.104.229.92 - - [24/Jun/2021:07:52:58 +0200] "GET /sitemap HTTP/1.1" 503 190 "-" "OpenRXV harvesting bot; https://github.com/ilri/OpenRXV"
 </code></pre><ul>
 <li>I fixed nginx so it always allows people to get the sitemap and then re-ran the plugins… now it’s checking 180,000+ handles to see if they are collections or items…
 <ul>
@@ -478,7 +478,7 @@ $ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspa
 </li>
 <li>According to the api logs we will be adding 5,697 items:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ docker logs api 2>/dev/null | grep dspace_add_missing_items | sort | uniq | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console">$ docker logs api 2>/dev/null | grep dspace_add_missing_items | sort | uniq | wc -l
 5697
 </code></pre><ul>
 <li>Spent a few hours with Moayad troubleshooting and improving OpenRXV
@@ -496,7 +496,7 @@ $ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspa
 </ul>
 </li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ redis-cli
+<pre tabindex="0"><code class="language-console" data-lang="console">$ redis-cli
 127.0.0.1:6379> SCAN 0 COUNT 5
 1) "49152"
 2) 1) "bull:plugins:476595"
@@ -507,14 +507,14 @@ $ grep -oE '"handle":"([[:digit:]]|\.)+/[[:digit:]]+"' cgspa
 </code></pre><ul>
 <li>We can apparently get the names of the jobs in each hash using <code>hget</code>:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">127.0.0.1:6379> TYPE bull:plugins:401827
+<pre tabindex="0"><code class="language-console" data-lang="console">127.0.0.1:6379> TYPE bull:plugins:401827
 hash
 127.0.0.1:6379> HGET bull:plugins:401827 name
 "dspace_add_missing_items"
 </code></pre><ul>
 <li>I whipped up a one liner to get the keys for all plugin jobs, convert to redis <code>HGET</code> commands to extract the value of the name field, and then sort them by their counts:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ redis-cli KEYS "bull:plugins:*" \
+<pre tabindex="0"><code class="language-console" data-lang="console">$ redis-cli KEYS "bull:plugins:*" \
 | sed -e 's/^bull/HGET bull/' -e 's/\([[:digit:]]\)$/\1 name/' \
 | ncat -w 3 localhost 6379 \
 | grep -v -E '^\$' | sort | uniq -c | sort -h
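The same tally can be done without ncat by looping over the keys with redis-cli itself (a sketch; it is slower for many keys since each `HGET` opens its own connection, which is exactly why the ncat pipeline above is clever):

```shell
# Fetch each bull job's "name" field via HGET and count occurrences per plugin name.
for key in $(redis-cli KEYS 'bull:plugins:*'); do
  redis-cli HGET "$key" name
done | sort | uniq -c | sort -h
```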
@@ -544,7 +544,7 @@ hash
 <ul>
 <li>Looking at the DSpace log I see there was definitely a higher number of sessions that day, perhaps twice the normal:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">$ for file in dspace.log.2021-06-[12]*; do echo "$file"; grep -oE 'session_id=[A-Z0-9]{32}' "$file" | sort | uniq | wc -l; done
+<pre tabindex="0"><code class="language-console" data-lang="console">$ for file in dspace.log.2021-06-[12]*; do echo "$file"; grep -oE 'session_id=[A-Z0-9]{32}' "$file" | sort | uniq | wc -l; done
 dspace.log.2021-06-10
 19072
 dspace.log.2021-06-11
@@ -584,7 +584,7 @@ dspace.log.2021-06-27
 </code></pre><ul>
 <li>I see 15,000 unique IPs in the XMLUI logs alone on that day:</li>
 </ul>
-<pre><code class="language-console" data-lang="console"># zcat /var/log/nginx/access.log.5.gz /var/log/nginx/access.log.4.gz | grep '23/Jun/2021' | awk '{print $1}' | sort | uniq | wc -l
+<pre tabindex="0"><code class="language-console" data-lang="console"># zcat /var/log/nginx/access.log.5.gz /var/log/nginx/access.log.4.gz | grep '23/Jun/2021' | awk '{print $1}' | sort | uniq | wc -l
 15835
 </code></pre><ul>
 <li>Annoyingly I found 37,000 more hits from Bing using <code>dns:*msnbot* AND dns:*.msn.com.</code> as a Solr filter
@@ -628,7 +628,7 @@ dspace.log.2021-06-27
 </li>
 <li>The DSpace log shows:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">2021-06-30 08:19:15,874 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ Cannot get a connection, pool error Timeout waiting for idle object
+<pre tabindex="0"><code class="language-console" data-lang="console">2021-06-30 08:19:15,874 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper @ Cannot get a connection, pool error Timeout waiting for idle object
 </code></pre><ul>
 <li>The first one of these I see is from last night at 2021-06-29 at 10:47 PM</li>
 <li>I restarted Tomcat 7 and CGSpace came back up…</li>
@@ -641,12 +641,12 @@ dspace.log.2021-06-27
 </li>
 <li>Export a list of all CGSpace’s AGROVOC keywords with counts for Enrico and Elizabeth Arnaud to discuss with AGROVOC:</li>
 </ul>
-<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT text_value AS "dcterms.subject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id = 187 GROUP BY "dcterms.subject" ORDER BY count DESC) to /tmp/2021-06-30-agrovoc.csv WITH CSV HEADER;
+<pre tabindex="0"><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT text_value AS "dcterms.subject", count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id = 187 GROUP BY "dcterms.subject" ORDER BY count DESC) to /tmp/2021-06-30-agrovoc.csv WITH CSV HEADER;
 COPY 20780
 </code></pre><ul>
 <li>Actually Enrico wanted NON AGROVOC, so I extracted all the center and CRP subjects (ignoring system office and themes):</li>
 </ul>
-<pre><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242) GROUP BY subject ORDER BY count DESC) to /tmp/2021-06-30-non-agrovoc.csv WITH CSV HEADER;
+<pre tabindex="0"><code class="language-console" data-lang="console">localhost/dspace63= > \COPY (SELECT DISTINCT LOWER(text_value) AS subject, count(*) FROM metadatavalue WHERE dspace_object_id in (SELECT dspace_object_id FROM item) AND metadata_field_id IN (119, 120, 127, 122, 128, 125, 135, 203, 208, 210, 215, 123, 236, 242) GROUP BY subject ORDER BY count DESC) to /tmp/2021-06-30-non-agrovoc.csv WITH CSV HEADER;
 COPY 1710
 </code></pre><ul>
 <li>Fix an issue in the Ansible infrastructure playbooks for the DSpace role
@@ -657,12 +657,12 @@ COPY 1710
 </li>
 <li>I saw a strange message in the Tomcat 7 journal on DSpace Test (linode26):</li>
 </ul>
-<pre><code class="language-console" data-lang="console">Jun 30 16:00:09 linode26 tomcat7[30294]: WARNING: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [111,733] milliseconds.
+<pre tabindex="0"><code class="language-console" data-lang="console">Jun 30 16:00:09 linode26 tomcat7[30294]: WARNING: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [111,733] milliseconds.
 </code></pre><ul>
 <li>What’s even crazier is that it is twice that on CGSpace (linode18)!</li>
 <li>Apparently OpenJDK defaults to using <code>/dev/random</code> (see <code>/etc/java-8-openjdk/security/java.security</code>):</li>
 </ul>
-<pre><code class="language-console" data-lang="console">securerandom.source=file:/dev/urandom
+<pre tabindex="0"><code class="language-console" data-lang="console">securerandom.source=file:/dev/urandom
 </code></pre><ul>
 <li><code>/dev/random</code> blocks and can take a long time to get entropy, and urandom on modern Linux is a cryptographically secure pseudorandom number generator
 <ul>
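A common workaround for slow SecureRandom seeding is to point the JVM at the non-blocking source explicitly, for example in Tomcat's bin/setenv.sh (the path and the JAVA_OPTS variable are assumptions about this deployment; the extra `/./` defeats an old JDK path-canonicalization check that would otherwise silently fall back to /dev/random):

```shell
# setenv.sh fragment: append the urandom override to Tomcat's JVM options.
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTS
```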