<!DOCTYPE html>
|
||
<html lang="en" >
|
||
|
||
<head>
|
||
<meta charset="utf-8">
|
||
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
|
||
|
||
|
||
<meta property="og:title" content="February, 2020" />
|
||
<meta property="og:description" content="2020-02-02
|
||
|
||
Continue working on porting CGSpace’s DSpace 5 code to DSpace 6.3 that I started yesterday
|
||
|
||
Sign up for an account with MaxMind so I can get the GeoLite2-City.mmdb database
|
||
I still need to wire up the API credentials and cron job into the Ansible infrastructure playbooks
|
||
Fix some minor issues in the config and XMLUI themes, like removing Atmire stuff
|
||
The code finally builds and runs with a fresh install
|
||
|
||
|
||
" />
|
||
<meta property="og:type" content="article" />
|
||
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2020-02/" />
|
||
<meta property="article:published_time" content="2020-02-02T11:56:30+02:00" />
|
||
<meta property="article:modified_time" content="2022-05-05T16:50:10+03:00" />
|
||
|
||
|
||
|
||
<meta name="twitter:card" content="summary"/>
|
||
<meta name="twitter:title" content="February, 2020"/>
|
||
<meta name="twitter:description" content="2020-02-02
|
||
|
||
Continue working on porting CGSpace’s DSpace 5 code to DSpace 6.3 that I started yesterday
|
||
|
||
Sign up for an account with MaxMind so I can get the GeoLite2-City.mmdb database
|
||
I still need to wire up the API credentials and cron job into the Ansible infrastructure playbooks
|
||
Fix some minor issues in the config and XMLUI themes, like removing Atmire stuff
|
||
The code finally builds and runs with a fresh install
|
||
|
||
|
||
"/>
|
||
<meta name="generator" content="Hugo 0.100.2" />
|
||
|
||
|
||
|
||
<script type="application/ld+json">
|
||
{
|
||
"@context": "http://schema.org",
|
||
"@type": "BlogPosting",
|
||
"headline": "February, 2020",
|
||
"url": "https://alanorth.github.io/cgspace-notes/2020-02/",
|
||
"wordCount": "7239",
|
||
"datePublished": "2020-02-02T11:56:30+02:00",
|
||
"dateModified": "2022-05-05T16:50:10+03:00",
|
||
"author": {
|
||
"@type": "Person",
|
||
"name": "Alan Orth"
|
||
},
|
||
"keywords": "Notes"
|
||
}
|
||
</script>
|
||
|
||
|
||
|
||
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2020-02/">
|
||
|
||
<title>February, 2020 | CGSpace Notes</title>
|
||
|
||
|
||
<!-- combined, minified CSS -->
|
||
|
||
<link href="https://alanorth.github.io/cgspace-notes/css/style.beb8012edc08ba10be012f079d618dc243812267efe62e11f22fe49618f976a4.css" rel="stylesheet" integrity="sha256-vrgBLtwIuhC+AS8HnWGNwkOBImfv5i4R8i/klhj5dqQ=" crossorigin="anonymous">
|
||
|
||
|
||
<!-- minified Font Awesome for SVG icons -->
|
||
|
||
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.f5072c55a0721857184db93a50561d7dc13975b4de2e19db7f81eb5f3fa57270.js" integrity="sha256-9QcsVaByGFcYTbk6UFYdfcE5dbTeLhnbf4HrXz+lcnA=" crossorigin="anonymous"></script>
|
||
|
||
<!-- RSS 2.0 feed -->
|
||
|
||
|
||
|
||
|
||
</head>
|
||
|
||
<body>
|
||
|
||
|
||
<div class="blog-masthead">
|
||
<div class="container">
|
||
<nav class="nav blog-nav">
|
||
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
|
||
</nav>
|
||
</div>
|
||
</div>
|
||
|
||
|
||
|
||
|
||
<header class="blog-header">
|
||
<div class="container">
|
||
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
|
||
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
|
||
</div>
|
||
</header>
|
||
|
||
|
||
|
||
|
||
<div class="container">
|
||
<div class="row">
|
||
<div class="col-sm-8 blog-main">
|
||
|
||
|
||
|
||
|
||
<article class="blog-post">
|
||
<header>
|
||
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2020-02/">February, 2020</a></h2>
|
||
<p class="blog-post-meta">
|
||
<time datetime="2020-02-02T11:56:30+02:00">Sun Feb 02, 2020</time>
|
||
in
|
||
<span class="fas fa-folder" aria-hidden="true"></span> <a href="/cgspace-notes/categories/notes/" rel="category tag">Notes</a>
|
||
|
||
|
||
</p>
|
||
</header>
|
||
<h2 id="2020-02-02">2020-02-02</h2>
|
||
<ul>
|
||
<li>Continue working on porting CGSpace’s DSpace 5 code to DSpace 6.3 that I started yesterday
|
||
<ul>
|
||
<li>Sign up for an account with MaxMind so I can get the GeoLite2-City.mmdb database</li>
|
||
<li>I still need to wire up the API credentials and cron job into the Ansible infrastructure playbooks (roughly along the lines of the sketch below)</li>
|
||
<li>Fix some minor issues in the config and XMLUI themes, like removing Atmire stuff</li>
|
||
<li>The code finally builds and runs with a fresh install</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
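<ul>
<li>For reference, wiring up MaxMind will probably just be a <code>GeoIP.conf</code> with the account credentials plus a cron entry for <code>geoipupdate</code>, something like this sketch (the account ID, key, and schedule here are placeholders, not the real values from the playbooks):</li>
</ul>
<pre tabindex="0"><code># /etc/GeoIP.conf
AccountID 999999
LicenseKey xxxxxxxxxxxxxxxx
EditionIDs GeoLite2-City

# crontab entry to refresh GeoLite2-City.mmdb twice a week
0 5 * * 2,5 /usr/bin/geoipupdate
</code></pre>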
|
||
<ul>
|
||
<li>Now we don’t specify the build environment because site modifications are in <code>local.cfg</code>, so we just build like this:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ schedtool -D -e ionice -c2 -n7 nice -n19 mvn -U -Dmirage2.on=true -Dmirage2.deps.included=false clean package
|
||
</code></pre><ul>
|
||
<li>And it seems that we need to enable <code>pgcrypto</code> now (used for UUIDs):</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ psql -h localhost -U postgres dspace63
|
||
dspace63=# CREATE EXTENSION pgcrypto;
|
||
CREATE EXTENSION pgcrypto;
|
||
</code></pre><ul>
|
||
<li>I tried importing a PostgreSQL snapshot from CGSpace and had errors due to missing Atmire database migrations
|
||
<ul>
|
||
<li>If I try to run <code>dspace database migrate</code> I get the IDs of the migrations that are missing</li>
|
||
<li>I delete them manually in psql:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>dspace63=# DELETE FROM schema_version WHERE version IN ('5.0.2015.01.27', '5.6.2015.12.03.2', '5.6.2016.08.08', '5.0.2017.04.28', '5.0.2017.09.25', '5.8.2015.12.03.3');
|
||
</code></pre><ul>
|
||
<li>Then I ran <code>dspace database migrate</code> and got an error:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ~/dspace63/bin/dspace database migrate
|
||
|
||
Database URL: jdbc:postgresql://localhost:5432/dspace63?ApplicationName=dspaceCli
|
||
Migrating database to latest version... (Check dspace logs for details)
|
||
Migration exception:
|
||
java.sql.SQLException: Flyway migration error occurred
|
||
at org.dspace.storage.rdbms.DatabaseUtils.updateDatabase(DatabaseUtils.java:673)
|
||
at org.dspace.storage.rdbms.DatabaseUtils.updateDatabase(DatabaseUtils.java:576)
|
||
at org.dspace.storage.rdbms.DatabaseUtils.main(DatabaseUtils.java:221)
|
||
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
|
||
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
|
||
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
|
||
at java.lang.reflect.Method.invoke(Method.java:498)
|
||
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
|
||
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
|
||
Caused by: org.flywaydb.core.internal.dbsupport.FlywaySqlScriptException:
|
||
Migration V6.0_2015.03.07__DS-2701_Hibernate_migration.sql failed
|
||
-----------------------------------------------------------------
|
||
SQL State : 2BP01
|
||
Error Code : 0
|
||
Message : ERROR: cannot drop table metadatavalue column resource_id because other objects depend on it
|
||
Detail: view eperson_metadata depends on table metadatavalue column resource_id
|
||
Hint: Use DROP ... CASCADE to drop the dependent objects too.
|
||
Location : org/dspace/storage/rdbms/sqlmigration/postgres/V6.0_2015.03.07__DS-2701_Hibernate_migration.sql (/home/aorth/src/git/DSpace-6.3/file:/home/aorth/dspace63/lib/dspace-api-6.3.jar!/org/dspace/storage/rdbms/sqlmigration/postgres/V6.0_2015.03.07__DS-2701_Hibernate_migration.sql)
|
||
Line : 391
|
||
Statement : ALTER TABLE metadatavalue DROP COLUMN IF EXISTS resource_id
|
||
|
||
at org.flywaydb.core.internal.dbsupport.SqlScript.execute(SqlScript.java:117)
|
||
at org.flywaydb.core.internal.resolver.sql.SqlMigrationExecutor.execute(SqlMigrationExecutor.java:71)
|
||
at org.flywaydb.core.internal.command.DbMigrate.doMigrate(DbMigrate.java:352)
|
||
at org.flywaydb.core.internal.command.DbMigrate.access$1100(DbMigrate.java:47)
|
||
at org.flywaydb.core.internal.command.DbMigrate$4.doInTransaction(DbMigrate.java:308)
|
||
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:72)
|
||
at org.flywaydb.core.internal.command.DbMigrate.applyMigration(DbMigrate.java:305)
|
||
at org.flywaydb.core.internal.command.DbMigrate.access$1000(DbMigrate.java:47)
|
||
at org.flywaydb.core.internal.command.DbMigrate$2.doInTransaction(DbMigrate.java:230)
|
||
at org.flywaydb.core.internal.command.DbMigrate$2.doInTransaction(DbMigrate.java:173)
|
||
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:72)
|
||
at org.flywaydb.core.internal.command.DbMigrate.migrate(DbMigrate.java:173)
|
||
at org.flywaydb.core.Flyway$1.execute(Flyway.java:959)
|
||
at org.flywaydb.core.Flyway$1.execute(Flyway.java:917)
|
||
at org.flywaydb.core.Flyway.execute(Flyway.java:1373)
|
||
at org.flywaydb.core.Flyway.migrate(Flyway.java:917)
|
||
at org.dspace.storage.rdbms.DatabaseUtils.updateDatabase(DatabaseUtils.java:662)
|
||
... 8 more
|
||
Caused by: org.postgresql.util.PSQLException: ERROR: cannot drop table metadatavalue column resource_id because other objects depend on it
|
||
Detail: view eperson_metadata depends on table metadatavalue column resource_id
|
||
Hint: Use DROP ... CASCADE to drop the dependent objects too.
|
||
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
|
||
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
|
||
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
|
||
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
|
||
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
|
||
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
|
||
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
|
||
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
|
||
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
|
||
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
|
||
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
|
||
at org.flywaydb.core.internal.dbsupport.JdbcTemplate.executeStatement(JdbcTemplate.java:238)
|
||
at org.flywaydb.core.internal.dbsupport.SqlScript.execute(SqlScript.java:114)
|
||
... 24 more
|
||
</code></pre><ul>
|
||
<li>I think I might need to update the sequences first… nope</li>
|
||
<li>Perhaps it’s due to some missing bitstream IDs and I need to run <code>dspace cleanup</code> on CGSpace and take a new PostgreSQL dump… nope</li>
|
||
<li>Someone in a thread on the dspace-tech mailing list regarding this migration noticed that their database had some views created that were using the <code>resource_id</code> column</li>
|
||
<li>Our database had the same issue, where the <code>eperson_metadata</code> view was created by something (Atmire module?) but has no references in the vanilla DSpace code, so I dropped it and tried the migration again:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>dspace63=# DROP VIEW eperson_metadata;
|
||
DROP VIEW
|
||
</code></pre><ul>
|
||
<li>After that the migration was successful and DSpace starts up successfully and begins indexing
|
||
<ul>
|
||
<li>xmlui, solr, jspui, rest, and oai are working (rest was redirecting to HTTPS, so I set the Tomcat connector to <code>secure="true"</code>, which fixed it on localhost but caused other issues, so I disabled it for now; see the connector sketch below)</li>
|
||
<li>I started diffing our themes against the Mirage 2 reference theme to capture the latest changes</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
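<ul>
<li>For reference, the <code>secure</code> attribute mentioned above goes on the Tomcat HTTP connector in <code>server.xml</code>; a sketch (the port and other attributes are illustrative, not our actual connector definition):</li>
</ul>
<pre tabindex="0"><code><Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           secure="true" />
</code></pre>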
|
||
<h2 id="2020-02-03">2020-02-03</h2>
|
||
<ul>
|
||
<li>Update DSpace mimetype fallback images from <a href="https://github.com/KDE/breeze-icons">KDE Breeze Icons</a> project
|
||
<ul>
|
||
<li>Our icons are four years old (see <a href="https://alanorth.github.io/dspace-bitstream-icons/">my bitstream icons demo</a>)</li>
|
||
</ul>
|
||
</li>
|
||
<li>Issues remaining in the DSpace 6 port of our CGSpace 5.x code:
|
||
<ul>
|
||
<li><input checked="" disabled="" type="checkbox"> Community and collection pages only show one recent submission (seems that there is only one item in Solr?)</li>
|
||
<li><input checked="" disabled="" type="checkbox"> Community and collection pages have tons of “Browse” buttons that we need to remove</li>
|
||
<li><input checked="" disabled="" type="checkbox"> Order of navigation elements in right side bar (“My Account” etc, compare to DSpace Test)</li>
|
||
<li><input disabled="" type="checkbox"> Home page trail says “CGSpace Home” instead of “CGSpace Home / Community List” (see DSpace Test)</li>
|
||
</ul>
|
||
</li>
|
||
<li>There are lots of errors in the DSpace log, which might explain some of the issues with recent submissions / Solr:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>2020-02-03 10:27:14,485 ERROR org.dspace.browse.ItemCountDAOSolr @ caught exception:
|
||
org.dspace.discovery.SearchServiceException: Invalid UUID string: 1
|
||
2020-02-03 13:20:20,475 ERROR org.dspace.app.xmlui.aspect.discovery.AbstractRecentSubmissionTransformer @ Caught SearchServiceException while retrieving recent submission for: home page
|
||
org.dspace.discovery.SearchServiceException: Invalid UUID string: 111210
|
||
</code></pre><ul>
|
||
<li>If I look in Solr’s search core I do actually see items with integers for their resource ID, which I think are all supposed to be UUIDs now…</li>
|
||
<li>I dropped all the documents in the search core:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ http --print b 'http://localhost:8080/solr/search/update?stream.body=<delete><query>*:*</query></delete>&commit=true'
|
||
</code></pre><ul>
|
||
<li>Still didn’t work, so I’m going to try a clean database import and migration:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ createdb -h localhost -U postgres -O dspacetest --encoding=UNICODE dspace63
|
||
$ psql -h localhost -U postgres -c 'alter user dspacetest superuser;'
|
||
$ pg_restore -h localhost -U postgres -d dspace63 -O --role=dspacetest -h localhost dspace_2020-01-27.backup
|
||
$ psql -h localhost -U postgres -c 'alter user dspacetest nosuperuser;'
|
||
$ psql -h localhost -U postgres dspace63
|
||
dspace63=# CREATE EXTENSION pgcrypto;
|
||
dspace63=# DELETE FROM schema_version WHERE version IN ('5.0.2015.01.27', '5.6.2015.12.03.2', '5.6.2016.08.08', '5.0.2017.04.28', '5.0.2017.09.25', '5.8.2015.12.03.3');
|
||
dspace63=# DROP VIEW eperson_metadata;
|
||
dspace63=# \q
|
||
$ psql -h localhost -U postgres -f ~/src/git/DSpace/dspace/etc/postgres/update-sequences.sql dspace63
|
||
$ ~/dspace63/bin/dspace database migrate
|
||
</code></pre><ul>
|
||
<li>I notice that the indexing doesn’t work correctly if I start it manually with <code>dspace index-discovery -b</code> (search.resourceid becomes an integer!)
|
||
<ul>
|
||
<li>If I induce an indexing by touching <code>dspace/solr/search/conf/reindex.flag</code> the search.resourceid values are all UUIDs… (a query to spot-check this is sketched at the end of this section)</li>
|
||
</ul>
|
||
</li>
|
||
<li>Speaking of database stuff, there was a performance-related update for the <a href="https://github.com/DSpace/DSpace/pull/1791/">indexes that we used in DSpace 5</a>
|
||
<ul>
|
||
<li>We might want to <a href="https://github.com/DSpace/DSpace/pull/1792">apply it in DSpace 6</a>, as it was never merged to 6.x, but it helped with the performance of <code>/submissions</code> in XMLUI for us in <a href="/cgspace-notes/2018-03/">2018-03</a></li>
|
||
</ul>
|
||
</li>
|
||
</ul>
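<ul>
<li>A quick way to spot-check the <code>search.resourceid</code> issue above is to pull a few values straight from the search core, something like:</li>
</ul>
<pre tabindex="0"><code>$ curl -s 'http://localhost:8080/solr/search/select?q=*:*&rows=5&fl=search.resourceid&wt=json&indent=true'
</code></pre>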
|
||
<h2 id="2020-02-04">2020-02-04</h2>
|
||
<ul>
|
||
<li>The indexing issue I was having yesterday seems to only present itself the first time a new installation is running DSpace 6
|
||
<ul>
|
||
<li>Once the indexing induced by touching <code>dspace/solr/search/conf/reindex.flag</code> has finished, subsequent manual invocations of <code>dspace index-discovery -b</code> work as expected</li>
|
||
<li>Nevertheless, I sent a message to the dspace-tech mailing list describing the issue to see if anyone has any comments</li>
|
||
</ul>
|
||
</li>
|
||
<li>I am seeing that the unreleased DSpace 6.4 has quite a few important commits, so it might be better for us to target that version
|
||
<ul>
|
||
<li>I did a simple test and it’s easy to rebase my current 6.3 branch on top of the upstream <code>dspace-6_x</code> branch:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ git checkout -b 6_x-dev64 6_x-dev
|
||
$ git rebase -i upstream/dspace-6_x
|
||
</code></pre><ul>
|
||
<li>I finally understand why our themes show all the “Browse by” buttons on community and collection pages in DSpace 6.x
|
||
<ul>
|
||
<li>The code in <code>./dspace-xmlui/src/main/java/org/dspace/app/xmlui/aspect/browseArtifacts/CommunityBrowse.java</code> iterates over all the browse indexes and prints them when it is called</li>
|
||
<li>The XMLUI theme code in <code>dspace/modules/xmlui-mirage2/src/main/webapp/themes/0_CGIAR/xsl/preprocess/browse.xsl</code> calls the template because the id of the div matches “aspect.browseArtifacts.CommunityBrowse.list.community-browse”</li>
|
||
<li>I checked the DRI of a community page on my local 6.x and DSpace Test 5.x by appending <code>?XML</code> to the URL and I see the ID is missing on DSpace 5.x (a curl one-liner for this check is sketched at the end of this section)</li>
|
||
<li>The issue is the same with the ordering of the “My Account” link, but in Navigation.java</li>
|
||
<li>I tried modifying <code>preprocess/browse.xsl</code> but it always ends up printing some default list of browse by links…</li>
|
||
<li>I’m starting to wonder if Atmire’s modules somehow override this, as I don’t see how <code>CommunityBrowse.java</code> can behave like ours on DSpace 5.x unless they have overridden it (as the open source code is the same in 5.x and 6.x)</li>
|
||
<li>At least the “account” link in the sidebar is overridden in our 5.x branch because Atmire copied a modified <code>Navigation.java</code> to the local xmlui modules folder… so that explains that (and it’s easy to replicate in 6.x)</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
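<ul>
<li>The DRI check mentioned above can be done from the command line too, for example against a community page (the handle here is only an example):</li>
</ul>
<pre tabindex="0"><code>$ curl -s 'https://dspacetest.cgiar.org/handle/10568/16814?XML' | xmllint --format - | grep -i community-browse
</code></pre>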
|
||
<h2 id="2020-02-05">2020-02-05</h2>
|
||
<ul>
|
||
<li>UptimeRobot told me that AReS Explorer crashed last night, so I logged into it, ran all updates, and rebooted it</li>
|
||
<li>Testing Discovery indexing speed on my local DSpace 6.3:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ time schedtool -D -e ~/dspace63/bin/dspace index-discovery -b
|
||
schedtool -D -e ~/dspace63/bin/dspace index-discovery -b 3771.78s user 93.63s system 41% cpu 2:34:19.53 total
|
||
schedtool -D -e ~/dspace63/bin/dspace index-discovery -b 3360.28s user 82.63s system 38% cpu 2:30:22.07 total
|
||
schedtool -D -e ~/dspace63/bin/dspace index-discovery -b 4678.72s user 138.87s system 42% cpu 3:08:35.72 total
|
||
schedtool -D -e ~/dspace63/bin/dspace index-discovery -b 3334.19s user 86.54s system 35% cpu 2:41:56.73 total
|
||
</code></pre><ul>
|
||
<li>DSpace 5.8 was taking about 1 hour (or less on this laptop), so this is 2-3 times longer!</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ time schedtool -D -e ~/dspace/bin/dspace index-discovery -b
|
||
schedtool -D -e ~/dspace/bin/dspace index-discovery -b 299.53s user 69.67s system 20% cpu 30:34.47 total
|
||
schedtool -D -e ~/dspace/bin/dspace index-discovery -b 270.31s user 69.88s system 19% cpu 29:01.38 total
|
||
</code></pre><ul>
|
||
<li>Checking out the DSpace 6.x REST API query client
|
||
<ul>
|
||
<li>There is a <a href="https://terrywbrady.github.io/restReportTutorial/intro">tutorial</a> that explains how it works and I see it is very powerful because you can export a CSV of results in order to fix and re-upload them with batch import!</li>
|
||
<li>Custom queries can be added in <code>dspace-rest/src/main/webapp/static/reports/restQueryReport.js</code></li>
|
||
</ul>
|
||
</li>
|
||
<li>I noticed two new bots in the logs with the following user agents:
|
||
<ul>
|
||
<li><code>Jersey/2.6 (HttpUrlConnection 1.8.0_152)</code></li>
|
||
<li><code>magpie-crawler/1.1 (U; Linux amd64; en-GB; +http://www.brandwatch.net)</code></li>
|
||
</ul>
|
||
</li>
|
||
<li>I filed an <a href="https://github.com/atmire/COUNTER-Robots/issues/30">issue to add Jersey to the COUNTER-Robots</a> list</li>
|
||
<li>Peter noticed that the statlets on community, collection, and item pages aren’t working on CGSpace
|
||
<ul>
|
||
<li>I thought it might be related to the fact that the yearly sharding didn’t complete successfully this year so the <code>statistics-2019</code> core is empty</li>
|
||
<li>I removed the <code>statistics-2019</code> core (one way to do that is sketched at the end of today’s notes) and had to restart Tomcat like six times before all cores would load properly (ugh!!!!)</li>
|
||
<li>After that the statlets were working properly…</li>
|
||
</ul>
|
||
</li>
|
||
<li>Run all system updates on DSpace Test (linode19) and restart it</li>
|
||
</ul>
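<ul>
<li>Regarding removing a Solr core like <code>statistics-2019</code>: one way is the CoreAdmin API (a sketch; removing it from <code>solr.xml</code> and deleting the data directory works too):</li>
</ul>
<pre tabindex="0"><code>$ curl -s 'http://localhost:8081/solr/admin/cores?action=UNLOAD&core=statistics-2019'
</code></pre>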
|
||
<h2 id="2020-02-06">2020-02-06</h2>
|
||
<ul>
|
||
<li>I sent a mail to the dspace-tech mailing list asking about slow Discovery indexing speed in DSpace 6</li>
|
||
<li>I destroyed my PostgreSQL 9.6 containers and re-created them using PostgreSQL 10 to see if there are any speedups with DSpace 6.x:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ podman pull postgres:10-alpine
|
||
$ podman run --name dspacedb10 -v dspacedb_data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:10-alpine
|
||
$ createuser -h localhost -U postgres --pwprompt dspacetest
|
||
$ createdb -h localhost -U postgres -O dspacetest --encoding=UNICODE dspacetest
|
||
$ createdb -h localhost -U postgres -O dspacetest --encoding=UNICODE dspace63
|
||
$ psql -h localhost -U postgres -c 'alter user dspacetest superuser;'
|
||
$ pg_restore -h localhost -U postgres -d dspacetest -O --role=dspacetest -h localhost ~/Downloads/cgspace_2020-02-06.backup
|
||
$ pg_restore -h localhost -U postgres -d dspace63 -O --role=dspacetest -h localhost ~/Downloads/cgspace_2020-02-06.backup
|
||
$ psql -h localhost -U postgres -f ~/src/git/DSpace/dspace/etc/postgres/update-sequences.sql dspacetest
|
||
$ psql -h localhost -U postgres -f ~/src/git/DSpace/dspace/etc/postgres/update-sequences.sql dspace63
|
||
$ psql -h localhost -U postgres -c 'alter user dspacetest nosuperuser;'
|
||
$ psql -h localhost -U postgres dspace63
|
||
dspace63=# CREATE EXTENSION pgcrypto;
|
||
dspace63=# DELETE FROM schema_version WHERE version IN ('5.0.2015.01.27', '5.6.2015.12.03.2', '5.6.2016.08.08', '5.0.2017.04.28', '5.0.2017.09.25', '5.8.2015.12.03.3');
|
||
dspace63=# DROP VIEW eperson_metadata;
|
||
dspace63=# \q
|
||
</code></pre><ul>
|
||
<li>I purged ~33,000 hits from the “Jersey/2.6” bot in CGSpace’s statistics using my <code>check-spider-hits.sh</code> script:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./check-spider-hits.sh -d -p -f /tmp/jersey -s statistics -u http://localhost:8081/solr
|
||
$ for year in 2018 2017 2016 2015; do ./check-spider-hits.sh -d -p -f /tmp/jersey -s "statistics-${year}" -u http://localhost:8081/solr; done
|
||
</code></pre><ul>
|
||
<li>I noticed another user agent in the logs that we should add to the list:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>ReactorNetty/0.9.2.RELEASE
|
||
</code></pre><ul>
|
||
<li>I made <a href="https://github.com/atmire/COUNTER-Robots/issues/31">an issue on the COUNTER-Robots repository</a></li>
|
||
<li>I found a <a href="https://github.com/freedev/solr-import-export-json">nice tool for exporting and importing Solr records</a> and it seems to work for exporting our 2019 stats from the large statistics core!</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-2019-01.json -f 'dateYearMonth:2019-01' -k uid
|
||
$ ls -lh /tmp/statistics-2019-01.json
|
||
-rw-rw-r-- 1 aorth aorth 3.7G Feb 6 09:26 /tmp/statistics-2019-01.json
|
||
</code></pre><ul>
|
||
<li>Then I tested importing this by creating a new core in my development environment:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl 'http://localhost:8080/solr/admin/cores?action=CREATE&name=statistics-2019&instanceDir=/home/aorth/dspace/solr/statistics&dataDir=/home/aorth/dspace/solr/statistics-2019/data'
|
||
$ ./run.sh -s http://localhost:8080/solr/statistics-2019 -a import -o ~/Downloads/statistics-2019-01.json -k uid
|
||
</code></pre><ul>
|
||
<li>This imports the records into the core, but DSpace can’t see them, and when I restart Tomcat the core is not seen by Solr…</li>
|
||
<li>I got the core to load by adding it to <code>dspace/solr/solr.xml</code> manually, ie:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code> <cores adminPath="/admin/cores">
|
||
...
|
||
<core name="statistics" instanceDir="statistics" />
|
||
<core name="statistics-2019" instanceDir="statistics">
|
||
<property name="dataDir" value="/home/aorth/dspace/solr/statistics-2019/data" />
|
||
</core>
|
||
...
|
||
</cores>
|
||
</code></pre><ul>
|
||
<li>But I don’t like having to do that… why doesn’t it load automatically?</li>
|
||
<li>I sent a mail to the dspace-tech mailing list to ask about it</li>
|
||
<li>Just for fun I tried to load these stats into a Solr 7.7.2 instance using the DSpace 7 solr config:</li>
|
||
<li>First, create a Solr statistics core using the DSpace 7 config:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./bin/solr create_core -c statistics -d ~/src/git/DSpace/dspace/solr/statistics/conf -p 8983
|
||
</code></pre><ul>
|
||
<li>Then try to import the stats, skipping a shitload of fields that are apparently added to our Solr statistics by Atmire modules:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./run.sh -s http://localhost:8983/solr/statistics -a import -o ~/Downloads/statistics-2019-01.json -k uid -S author_mtdt,author_mtdt_search,iso_mtdt_search,iso_mtdt,subject_mtdt,subject_mtdt_search,containerCollection,containerCommunity,containerItem,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,geoipcountrycode,ip_ngram,ip_search,isArchived,isInternal,isWithdrawn,containerBitstream,file_id,referrer_ngram,referrer_search,userAgent_ngram,userAgent_search,version_id,complete_query,complete_query_search,filterquery,ngram_query_search,ngram_simplequery_search,simple_query,simple_query_search,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,actingGroupId,actorMemberGroupId,bitstreamCount,solr_update_time_stamp,bitstreamId
|
||
</code></pre><ul>
|
||
<li>OK that imported! I wonder if it works… maybe I’ll try another day</li>
|
||
</ul>
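<ul>
<li>A quick sanity check for next time would be to count the documents in the new Solr 7 core after the import, something like:</li>
</ul>
<pre tabindex="0"><code>$ curl -s 'http://localhost:8983/solr/statistics/select?q=*:*&rows=0&wt=json&indent=true'
</code></pre>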
|
||
<h2 id="2020-02-07">2020-02-07</h2>
|
||
<ul>
|
||
<li>I did some investigation into DSpace indexing performance using flame graphs
|
||
<ul>
|
||
<li>Excellent introduction: <a href="http://www.brendangregg.com/flamegraphs.html">http://www.brendangregg.com/flamegraphs.html</a></li>
|
||
<li>Using flame graphs with java: <a href="https://netflixtechblog.com/java-in-flames-e763b3d32166">https://netflixtechblog.com/java-in-flames-e763b3d32166</a></li>
|
||
<li>Fantastic wrapper scripts for doing perf on Java processes: <a href="https://github.com/jvm-profiling-tools/perf-map-agent">https://github.com/jvm-profiling-tools/perf-map-agent</a></li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ cd ~/src/git/perf-map-agent
|
||
$ cmake .
|
||
$ make
|
||
$ ./bin/create-links-in ~/.local/bin
|
||
$ export FLAMEGRAPH_DIR=/home/aorth/src/git/FlameGraph
|
||
$ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
|
||
$ export JAVA_OPTS="-XX:+PreserveFramePointer"
|
||
$ ~/dspace63/bin/dspace index-discovery -b &
|
||
# pid of tomcat java process
|
||
$ perf-java-flames 4478
|
||
# pid of java indexing process
|
||
$ perf-java-flames 11359
|
||
</code></pre><ul>
|
||
<li>All Java processes need to have <code>-XX:+PreserveFramePointer</code> if you want to trace their methods</li>
|
||
<li>I did the same tests against DSpace 5.8 and 6.4-SNAPSHOT’s CLI indexing process and Tomcat process
|
||
<ul>
|
||
<li>For what it’s worth, it appears all the Hibernate stuff is in the CLI processes, so we don’t need to trace the Tomcat process</li>
|
||
</ul>
|
||
</li>
|
||
<li>Here is the flame graph for DSpace 5.8’s <code>dspace index-discovery -b</code> java process:</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/flamegraph-java-cli-dspace58.svg" alt="DSpace 5.8 index-discovery flame graph"></p>
|
||
<ul>
|
||
<li>Here is the flame graph for DSpace 6.4-SNAPSHOT’s <code>dspace index-discovery -b</code> java process:</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/flamegraph-java-cli-dspace64-snapshot.svg" alt="DSpace 6.4-SNAPSHOT index-discovery flame graph"></p>
|
||
<ul>
|
||
<li>If the width of the stacks indicates time, then it’s clear that Hibernate takes longer…</li>
|
||
<li>Apparently there is a “flame diff” tool, I wonder if we can use that to compare!</li>
|
||
</ul>
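<ul>
<li>The FlameGraph repository ships a <code>difffolded.pl</code> script for exactly this kind of comparison, so it would be something like this sketch (output file names assumed):</li>
</ul>
<pre tabindex="0"><code>$ ./stackcollapse-perf.pl out.dspace58 > out.dspace58.folded
$ ./stackcollapse-perf.pl out.dspace64 > out.dspace64.folded
$ ./difffolded.pl out.dspace58.folded out.dspace64.folded | ./flamegraph.pl > flamegraph-diff.svg
</code></pre>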
|
||
<h2 id="2020-02-09">2020-02-09</h2>
|
||
<ul>
|
||
<li>This weekend I did a lot more testing of indexing performance with our DSpace 5.8 branch, vanilla DSpace 5.10, and vanilla DSpace 6.4-SNAPSHOT:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code># CGSpace 5.8
|
||
schedtool -D -e ~/dspace/bin/dspace index-discovery -b 385.72s user 131.16s system 19% cpu 43:21.18 total
|
||
schedtool -D -e ~/dspace/bin/dspace index-discovery -b 382.95s user 127.31s system 20% cpu 42:10.07 total
|
||
schedtool -D -e ~/dspace/bin/dspace index-discovery -b 368.56s user 143.97s system 20% cpu 42:22.66 total
|
||
schedtool -D -e ~/dspace/bin/dspace index-discovery -b 360.09s user 104.03s system 19% cpu 39:24.41 total
|
||
|
||
# Vanilla DSpace 5.10
|
||
schedtool -D -e ~/dspace510/bin/dspace index-discovery -b 236.19s user 59.70s system 3% cpu 2:03:31.14 total
|
||
schedtool -D -e ~/dspace510/bin/dspace index-discovery -b 232.41s user 50.38s system 3% cpu 2:04:16.00 total
|
||
|
||
# Vanilla DSpace 6.4-SNAPSHOT
|
||
schedtool -D -e ~/dspace63/bin/dspace index-discovery -b 5112.96s user 127.80s system 40% cpu 3:36:53.98 total
|
||
schedtool -D -e ~/dspace63/bin/dspace index-discovery -b 5112.96s user 127.80s system 40% cpu 3:21:0.0 total
|
||
</code></pre><ul>
|
||
<li>I generated better flame graphs for the DSpace indexing process by using <code>perf-record-stack</code> and filtering out the java process:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
|
||
$ export PERF_RECORD_SECONDS=60
|
||
$ export JAVA_OPTS="-XX:+PreserveFramePointer"
|
||
$ time schedtool -D -e ~/dspace/bin/dspace index-discovery -b &
|
||
# process id of java indexing process (not Tomcat)
|
||
$ perf-java-record-stack 169639
|
||
$ sudo perf script -i /tmp/perf-169639.data > out.dspace510-1
|
||
$ cat out.dspace510-1 | ../FlameGraph/stackcollapse-perf.pl | grep -E '^java' | ../FlameGraph/flamegraph.pl --color=java --hash > out.dspace510-1.svg
|
||
</code></pre><ul>
|
||
<li>All data recorded on my laptop with the same kernel, same boot, etc.</li>
|
||
<li>CGSpace 5.8 (with Atmire patches):</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/out.dspace58-2.svg" alt="DSpace 5.8 (with Atmire modules) index-discovery flame graph"></p>
|
||
<ul>
|
||
<li>Vanilla DSpace 5.10:</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/out.dspace510-3.svg" alt="Vanilla DSpace 5.10 index-discovery flame graph"></p>
|
||
<ul>
|
||
<li>Vanilla DSpace 6.4-SNAPSHOT:</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/out.dspace64-3.svg" alt="Vanilla DSpace 6.4-SNAPSHOT index-discovery flame graph"></p>
|
||
<ul>
|
||
<li>I sent my feedback to the dspace-tech mailing list so someone can hopefully comment.</li>
|
||
<li>Last week Peter asked Sisay to upload some items to CGSpace in the GENNOVATE collection (part of Gender CRP)
|
||
<ul>
|
||
<li>He uploaded them here: <a href="https://cgspace.cgiar.org/handle/10568/105926">https://cgspace.cgiar.org/handle/10568/105926</a></li>
|
||
<li>On a whim I checked and found five duplicates there, which means Sisay didn’t even check</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<h2 id="2020-02-10">2020-02-10</h2>
|
||
<ul>
|
||
<li>Follow up with <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=706">Atmire about DSpace 6.x upgrade</a>
|
||
<ul>
|
||
<li>I raised the issue of targeting 6.4-SNAPSHOT as well as the Discovery indexing performance issues in 6.x</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<h2 id="2020-02-11">2020-02-11</h2>
|
||
<ul>
|
||
<li>Maria from Bioversity asked me to add some ORCID iDs to our controlled vocabulary so I combined them with our existing ones and updated the names from the ORCID API:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml /tmp/bioversity-orcid-ids.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq > /tmp/2020-02-11-combined-orcids.txt
|
||
$ ./resolve-orcids.py -i /tmp/2020-02-11-combined-orcids.txt -o /tmp/2020-02-11-combined-names.txt -d
|
||
# sort names, copy to cg-creator-id.xml, add XML formatting, and then format with tidy (preserving accents)
|
||
$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
|
||
</code></pre><ul>
|
||
<li>Then I noticed some author names had changed, so I captured the old and new names in a CSV file and fixed them using <code>fix-metadata-values.py</code>:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i 2020-02-11-correct-orcid-ids.csv -db dspace -u dspace -p 'fuuu' -f cg.creator.id -t correct -m 240 -d
|
||
</code></pre><ul>
|
||
<li>On a hunch I decided to try to add these ORCID iDs to existing items that might not have them yet
|
||
<ul>
|
||
<li>I checked the database for likely matches to the author name (the kind of query is sketched at the end of today’s notes) and then created a CSV with the author names and ORCID iDs:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>dc.contributor.author,cg.creator.id
|
||
"Staver, Charles",charles staver: 0000-0002-4532-6077
|
||
"Staver, C.",charles staver: 0000-0002-4532-6077
|
||
"Fungo, R.",Robert Fungo: 0000-0002-4264-6905
|
||
"Remans, R.",Roseline Remans: 0000-0003-3659-8529
|
||
"Remans, Roseline",Roseline Remans: 0000-0003-3659-8529
|
||
"Rietveld A.",Anne Rietveld: 0000-0002-9400-9473
|
||
"Rietveld, A.",Anne Rietveld: 0000-0002-9400-9473
|
||
"Rietveld, A.M.",Anne Rietveld: 0000-0002-9400-9473
|
||
"Rietveld, Anne M.",Anne Rietveld: 0000-0002-9400-9473
|
||
"Fongar, A.",Andrea Fongar: 0000-0003-2084-1571
|
||
"Müller, Anna",Anna Müller: 0000-0003-3120-8560
|
||
"Müller, A.",Anna Müller: 0000-0003-3120-8560
|
||
</code></pre><ul>
|
||
<li>Running the <code>add-orcid-identifiers-csv.py</code> script I added 144 ORCID iDs to items on CGSpace!</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./add-orcid-identifiers-csv.py -i /tmp/2020-02-11-add-orcid-ids.csv -db dspace -u dspace -p 'fuuu'
|
||
</code></pre><ul>
|
||
<li>Minor updates to all Python utility scripts in the CGSpace git repository</li>
|
||
<li>Update the spider agent patterns in CGSpace <code>5_x-prod</code> branch from the latest <a href="https://github.com/atmire/COUNTER-Robots">COUNTER-Robots</a> project
|
||
<ul>
|
||
<li>I ran the <code>check-spider-hits.sh</code> script with the updated file and purged 6,000 hits from our Solr statistics core on CGSpace</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
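<ul>
<li>For reference, the author-name matching mentioned above was done with simple queries against <code>metadatavalue</code>, along these lines (the name is just an example):</li>
</ul>
<pre tabindex="0"><code>dspace=# SELECT DISTINCT text_value FROM metadatavalue WHERE resource_type_id=2 AND metadata_field_id=3 AND text_value LIKE 'Rietveld%';
</code></pre>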
|
||
<h2 id="2020-02-12">2020-02-12</h2>
|
||
<ul>
|
||
<li>Follow up with people about AReS funding for next phase</li>
|
||
<li>Peter asked about the “stats” and “summary” reports that he had requested in December
|
||
<ul>
|
||
<li>I opened a <a href="https://github.com/ilri/AReS/issues/13">new issue on AReS for the “summary” report</a></li>
|
||
</ul>
|
||
</li>
|
||
<li>Peter asked me to update John McIntire’s name format on CGSpace so I ran the following PostgreSQL query:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>dspace=# UPDATE metadatavalue SET text_value='McIntire, John M.' WHERE resource_type_id=2 AND metadata_field_id=3 AND text_value='McIntire, John';
|
||
UPDATE 26
|
||
</code></pre><h2 id="2020-02-17">2020-02-17</h2>
|
||
<ul>
|
||
<li>A few days ago Atmire responded to my question about DSpace 6.4-SNAPSHOT saying that they can only confirm that 6.3 works with their modules
|
||
<ul>
|
||
<li>I responded to say that we agree to target 6.3, but that I will cherry-pick important patches from the <code>dspace-6_x</code> branch at our own risk</li>
|
||
</ul>
|
||
</li>
|
||
<li>Send a message to dspace-devel asking them to tag DSpace 6.4</li>
|
||
<li>Udana from IWMI asked about the OAI base URL for their community on CGSpace
|
||
<ul>
|
||
<li>I think it should be this: <a href="https://cgspace.cgiar.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc&set=com_10568_16814">https://cgspace.cgiar.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc&set=com_10568_16814</a></li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<h2 id="2020-02-19">2020-02-19</h2>
|
||
<ul>
|
||
<li>I noticed a thread on the mailing list about the Tomcat header size and Solr max boolean clauses error
|
||
<ul>
|
||
<li>The solution is to do as we have done and increase the header size / boolean clause limits (the relevant settings are sketched below), or to simply <a href="https://wiki.lyrasis.org/display/DSPACE/TechnicalFaq#TechnicalFAQ-I'mgetting%22SolrException:BadRequest%22followedbyalongqueryora%22tooManyClauses%22Exception">disable access rights awareness</a> in Discovery</li>
|
||
<li>I applied the fix to the <code>5_x-prod</code> branch and cherry-picked it to <code>6_x-dev</code></li>
|
||
</ul>
|
||
</li>
|
||
<li>Upgrade Tomcat from 7.0.99 to 7.0.100 in <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure playbooks</a></li>
|
||
<li>Upgrade PostgreSQL JDBC driver from 42.2.9 to 42.2.10 in <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure playbooks</a></li>
|
||
<li>Run Tomcat and PostgreSQL JDBC driver updates on DSpace Test (linode19)</li>
|
||
</ul>
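<ul>
<li>For reference, the two settings in question are the Tomcat connector’s <code>maxHttpHeaderSize</code> and Solr’s <code>maxBooleanClauses</code> in <code>solrconfig.xml</code>; a sketch (the values are illustrative, not our production ones):</li>
</ul>
<pre tabindex="0"><code><!-- Tomcat server.xml connector attribute -->
<Connector port="8080" protocol="HTTP/1.1" maxHttpHeaderSize="16384" />

<!-- Solr solrconfig.xml, in the <query> section -->
<maxBooleanClauses>2048</maxBooleanClauses>
</code></pre>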
|
||
<h2 id="2020-02-23">2020-02-23</h2>
|
||
<ul>
|
||
<li>I see a new spider in the nginx logs on CGSpace:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>Mozilla/5.0 (compatible;Linespider/1.1;+https://lin.ee/4dwXkTH)
|
||
</code></pre><ul>
|
||
<li>I think this should be covered by the <a href="https://github.com/atmire/COUNTER-Robots">COUNTER-Robots</a> patterns for the statistics at least…</li>
|
||
<li>I see an IP (186.32.217.255) in Costa Rica making bot-like requests with the following user agent:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36
|
||
</code></pre><ul>
|
||
<li>Another IP address (31.6.77.23) in the UK making a few hundred requests without a user agent</li>
|
||
<li>I will add the IP addresses to the nginx badbots list</li>
|
||
<li>31.6.77.23 is in the UK and judging by its DNS it belongs to a <a href="https://www.bronco.co.uk/">web marketing company called Bronco</a>
|
||
<ul>
|
||
<li>I looked for its DNS entry in Solr statistics and found a few hundred thousand over the years:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics/select" -d "q=dns:/squeeze3.bronco.co.uk./&rows=0"
|
||
<?xml version="1.0" encoding="UTF-8"?>
|
||
<response>
|
||
<lst name="responseHeader"><int name="status">0</int><int name="QTime">4</int><lst name="params"><str name="q">dns:/squeeze3.bronco.co.uk./</str><str name="rows">0</str></lst></lst><result name="response" numFound="86044" start="0"></result>
|
||
</response>
|
||
</code></pre><ul>
|
||
<li>The totals in each core are:
|
||
<ul>
|
||
<li>statistics: 86044</li>
|
||
<li>statistics-2018: 65144</li>
|
||
<li>statistics-2017: 79405</li>
|
||
<li>statistics-2016: 121316</li>
|
||
<li>statistics-2015: 30720</li>
|
||
<li>statistics-2014: 4524</li>
|
||
<li>… so about 387,000 hits!</li>
|
||
</ul>
|
||
</li>
|
||
<li>I will purge them from each core one by one, ie:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics-2015/update?softCommit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>dns:squeeze3.bronco.co.uk.</query></delete>"
|
||
$ curl -s "http://localhost:8081/solr/statistics-2014/update?softCommit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>dns:squeeze3.bronco.co.uk.</query></delete>"
|
||
</code></pre><ul>
|
||
<li>Deploy latest Tomcat and PostgreSQL JDBC driver changes on CGSpace (linode18)</li>
|
||
<li>Deploy latest <code>5_x-prod</code> branch on CGSpace (linode18)</li>
|
||
<li>Run all system updates on CGSpace (linode18) server and reboot it
|
||
<ul>
|
||
<li>After the server came back up Tomcat started, but there were errors loading some Solr statistics cores</li>
|
||
<li>Luckily after restarting Tomcat once more they all came back up</li>
|
||
</ul>
|
||
</li>
|
||
<li>I ran the <code>dspace cleanup -v</code> process on CGSpace and got an error:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>Error: ERROR: update or delete on table "bitstream" violates foreign key constraint "bundle_primary_bitstream_id_fkey" on table "bundle"
|
||
Detail: Key (bitstream_id)=(183996) is still referenced from table "bundle".
|
||
</code></pre><ul>
|
||
<li>The solution is, as always:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code># su - postgres
|
||
$ psql dspace -c 'update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (183996);'
|
||
UPDATE 1
|
||
</code></pre><ul>
|
||
<li>Add one more Bioversity ORCID iD to the controlled vocabulary on CGSpace</li>
|
||
<li>Felix Shaw from Earlham emailed me to ask about his admin account on DSpace Test
|
||
<ul>
|
||
<li>His old one got lost when I re-sync’d DSpace Test with CGSpace a few weeks ago</li>
|
||
<li>I added a new account for him and added it to the Administrators group:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ dspace user -a -m wow@me.com -g Felix -s Shaw -p 'fuananaaa'
|
||
</code></pre><ul>
|
||
<li>For some reason the Atmire Content and Usage Analysis (CUA) module’s Usage Statistics is drawing blank graphs
|
||
<ul>
|
||
<li>I looked in the dspace.log and see:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>2020-02-23 11:28:13,696 ERROR org.dspace.app.xmlui.cocoon.DSpaceCocoonServletFilter @ Serious Error Occurred Processing Request!
|
||
org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.NoClassDefFoundError: Could not
|
||
initialize class org.jfree.chart.JFreeChart
|
||
</code></pre><ul>
|
||
<li>The same error happens on DSpace Test, but graphs are working on my local instance
|
||
<ul>
|
||
<li>The only thing I’ve changed recently is the Tomcat version, but it’s working locally…</li>
|
||
<li>I see the following file on my local instance, CGSpace, and DSpace Test: <code>dspace/webapps/xmlui/WEB-INF/lib/jfreechart-1.0.5.jar</code></li>
|
||
<li>I deployed Tomcat 7.0.99 on DSpace Test but the JFreeChart class still can’t be found…</li>
|
||
<li>So it must be something with the library search path…</li>
|
||
<li>Strange it works with Tomcat 7.0.100 on my local machine</li>
|
||
</ul>
|
||
</li>
|
||
<li>I copied the <code>jfreechart-1.0.5.jar</code> file to the Tomcat lib folder and then there was a different error when I loaded Atmire CUA:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>2020-02-23 16:25:10,841 ERROR org.dspace.app.xmlui.cocoon.DSpaceCocoonServletFilter @ Serious Error Occurred Processing Request! org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.awt.AWTError: Assistive Technology not found: org.GNOME.Accessibility.AtkWrapper
|
||
</code></pre><ul>
|
||
<li>Some search results suggested commenting out the following line in <code>/etc/java-8-openjdk/accessibility.properties</code>:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>assistive_technologies=org.GNOME.Accessibility.AtkWrapper
|
||
</code></pre><ul>
|
||
<li>And after removing the extra jfreechart library and restarting Tomcat I was able to load the usage statistics graph on DSpace Test…
|
||
<ul>
|
||
<li>Hmm, actually I think this is a Java bug, perhaps introduced or at <a href="https://bugs.openjdk.java.net/browse/JDK-8204862">least present in 18.04</a>, with lots of <a href="https://code-maven.com/slides/jenkins-intro/no-graph-error">references</a> to it <a href="https://issues.jenkins-ci.org/browse/JENKINS-39636">happening in other</a> configurations like Debian 9 with Jenkins, etc…</li>
|
||
<li>Apparently if you use the <em>non-headless</em> version of openjdk this doesn’t happen… but that pulls in X11 stuff so no thanks</li>
|
||
<li>Also, I see dozens of occurrences of this going back over one month (we have logs for about that period):</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code># grep -c 'initialize class org.jfree.chart.JFreeChart' dspace.log.2020-0*
|
||
dspace.log.2020-01-12:4
|
||
dspace.log.2020-01-13:66
|
||
dspace.log.2020-01-14:4
|
||
dspace.log.2020-01-15:36
|
||
dspace.log.2020-01-16:88
|
||
dspace.log.2020-01-17:4
|
||
dspace.log.2020-01-18:4
|
||
dspace.log.2020-01-19:4
|
||
dspace.log.2020-01-20:4
|
||
dspace.log.2020-01-21:4
|
||
...
|
||
</code></pre><ul>
|
||
<li>I deployed the fix on CGSpace (linode18) and I was able to see the graphs in the Atmire CUA Usage Statistics…</li>
|
||
<li>On an unrelated note, something weird is going on: I see millions of hits from IP 34.218.226.147 in Solr statistics, but if I remember correctly that IP belongs to CodeObia’s AReS explorer, which should only be using REST and therefore shouldn’t generate any Solr statistics…?</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics-2018/select" -d "q=ip:34.218.226.147&rows=0"
|
||
<?xml version="1.0" encoding="UTF-8"?>
|
||
<response>
|
||
<lst name="responseHeader"><int name="status">0</int><int name="QTime">811</int><lst name="params"><str name="q">ip:34.218.226.147</str><str name="rows">0</str></lst></lst><result name="response" numFound="5536097" start="0"></result>
|
||
</response>
|
||
</code></pre><ul>
|
||
<li>And there are apparently two million from last month (2020-01):</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics/select" -d "q=ip:34.218.226.147&fq=dateYearMonth:2020-01&rows=0"
|
||
<?xml version="1.0" encoding="UTF-8"?>
|
||
<response>
|
||
<lst name="responseHeader"><int name="status">0</int><int name="QTime">248</int><lst name="params"><str name="q">ip:34.218.226.147</str><str name="fq">dateYearMonth:2020-01</str><str name="rows">0</str></lst></lst><result name="response" numFound="2173455" start="0"></result>
|
||
</response>
|
||
</code></pre><ul>
|
||
<li>But when I look at the nginx access logs for the past month or so I only see 84,000, all of which are on <code>/rest</code> and none of which are to XMLUI:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code># zcat /var/log/nginx/*.log.*.gz | grep -c 34.218.226.147
|
||
84322
|
||
# zcat /var/log/nginx/*.log.*.gz | grep 34.218.226.147 | grep -c '/rest'
|
||
84322
|
||
</code></pre><ul>
|
||
<li>Either the requests didn’t get logged, or there is some mixup with the Solr documents (fuck!)
|
||
<ul>
|
||
<li>On second inspection, I <em>do</em> see lots of notes here about 34.218.226.147, including 150,000 on one day in October, 2018 alone…</li>
|
||
</ul>
|
||
</li>
|
||
<li>To make matters worse, I see hits from REST in the regular nginx access log!
|
||
<ul>
|
||
<li>I did a few tests but I can’t figure it out; it seems that hits appear in one or the other (not both)</li>
|
||
<li>Also, I see <em>zero</em> hits to <code>/rest</code> in the access.log on DSpace Test (linode19)</li>
|
||
</ul>
|
||
</li>
|
||
<li>Anyways, I faceted by IP in 2020-01 and see:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s 'http://localhost:8081/solr/statistics/select?q=*:*&fq=dateYearMonth:2020-01&rows=0&wt=json&indent=true&facet=true&facet.field=ip'
|
||
...
|
||
"172.104.229.92",2686876,
|
||
"34.218.226.147",2173455,
|
||
"163.172.70.248",80945,
|
||
"163.172.71.24",55211,
|
||
"163.172.68.99",38427,
|
||
</code></pre><ul>
|
||
<li>Surprise surprise, the top two IPs are from AReS servers… wtf.</li>
|
||
<li>The next three are from Online in France and they are all using this weird user agent and making tens of thousands of requests to Discovery:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>Mozilla/5.0 ((Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6)
|
||
</code></pre><ul>
|
||
<li>And all the same three are already inflating the statistics for 2020-02… hmmm.</li>
|
||
<li>I need to see why AReS harvesting is inflating the stats, as it should only be making REST requests…</li>
|
||
<li>Shiiiiit, I see 84,000 requests from the AReS IP today alone:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s 'http://localhost:8081/solr/statistics/select?q=time:2020-02-22*+AND+ip:172.104.229.92&rows=0&wt=json&indent=true'
|
||
...
|
||
"response":{"numFound":84594,"start":0,"docs":[]
|
||
</code></pre><ul>
|
||
<li>Fuck! And of course the ILRI websites doing their daily REST harvesting are causing issues too, from today alone:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code> "2a01:7e00::f03c:91ff:fe9a:3a37",35512,
|
||
"2a01:7e00::f03c:91ff:fe18:7396",26155,
|
||
</code></pre><ul>
|
||
<li>I need to try to make some requests for these URLs and observe if they make a statistics hit:
|
||
<ul>
|
||
<li><code>/rest/items?expand=metadata,bitstreams,parentCommunityList&limit=50&offset=82450</code></li>
|
||
<li><code>/rest/handle/10568/28702?expand=all</code></li>
|
||
</ul>
|
||
</li>
|
||
<li>Those are the requests AReS and ILRI servers are making… nearly 150,000 per day!</li>
|
||
<li>Well that settles it!</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ curl -s 'http://localhost:8081/solr/statistics/select?q=time:2020-02-23*+AND+statistics_type:view&fq=ip:78.128.99.24&rows=10&wt=json&indent=true' | grep numFound
|
||
"response":{"numFound":12,"start":0,"docs":[
|
||
$ curl -s 'https://dspacetest.cgiar.org/rest/items?expand=metadata,bitstreams,parentCommunityList&limit=50&offset=82450'
|
||
$ curl -s 'http://localhost:8081/solr/statistics/update?softCommit=true'
|
||
$ curl -s 'http://localhost:8081/solr/statistics/select?q=time:2020-02-23*+AND+statistics_type:view&fq=ip:78.128.99.24&rows=10&wt=json&indent=true' | grep numFound
|
||
"response":{"numFound":62,"start":0,"docs":[
|
||
</code></pre><ul>
|
||
<li>A REST request with <code>limit=50</code> will make exactly fifty <code>statistics_type=view</code> statistics in the Solr core… fuck.
|
||
<ul>
|
||
<li>So not only do I need to purge all these millions of hits, we also need to add these IPs to the list of spider IPs so they don’t get recorded (one way to do that in nginx is sketched below)</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
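<ul>
<li>One way to keep these harvesters out of the statistics at the nginx layer is to map their IPs to a “bot” user agent before proxying to Tomcat; a sketch that roughly mirrors the mapping we already have in the Ansible infrastructure playbooks (variable names illustrative):</li>
</ul>
<pre tabindex="0"><code># in the http context: flag known harvester IPs...
geo $ua_is_bot {
    default          0;
    34.218.226.147   1;
    172.104.229.92   1;
}

# ...then swap in a "bot" user agent for flagged clients
map $ua_is_bot $ua {
    0   $http_user_agent;
    1   "bot";
}

# and in the proxy configuration, send the mapped value to Tomcat
proxy_set_header User-Agent $ua;
</code></pre>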
|
||
<h2 id="2020-02-24">2020-02-24</h2>
|
||
<ul>
|
||
<li>I tried to add some IPs to the DSpace spider list so they would not get recorded in Solr statistics, but it doesn’t support IPv6
|
||
<ul>
|
||
<li>A better method is actually to just use the nginx mapping logic we already have to reset the user agent for these requests to “bot”</li>
|
||
<li>That, or to really insist that users harvesting us specify some kind of user agent</li>
|
||
</ul>
|
||
</li>
|
||
<li>I tried to add the IPs to our nginx IP bot mapping but it doesn’t seem to work… WTF, why is everything broken?!</li>
|
||
<li>Oh lord have mercy, the two AReS harvester IPs alone are responsible for 42 MILLION hits in 2019 and 2020 so far by themselves:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ http 'http://localhost:8081/solr/statistics/select?q=ip:34.218.226.147+OR+ip:172.104.229.92&rows=0&wt=json&indent=true' | grep numFound
|
||
"response":{"numFound":42395486,"start":0,"docs":[]
|
||
</code></pre><ul>
|
||
<li>I modified my <code>check-spider-hits.sh</code> script to create a version that works with IPs and purged 47 million stats from Solr on CGSpace:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f 2020-02-24-bot-ips.txt -s statistics -p
|
||
Purging 22809216 hits from 34.218.226.147 in statistics
|
||
Purging 19586270 hits from 172.104.229.92 in statistics
|
||
Purging 111137 hits from 2a01:7e00::f03c:91ff:fe9a:3a37 in statistics
|
||
Purging 271668 hits from 2a01:7e00::f03c:91ff:fe18:7396 in statistics
|
||
|
||
Total number of bot hits purged: 42778291
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f 2020-02-24-bot-ips.txt -s statistics-2018 -p
|
||
Purging 5535399 hits from 34.218.226.147 in statistics-2018
|
||
|
||
Total number of bot hits purged: 5535399
|
||
</code></pre><ul>
|
||
<li>(The <code>statistics</code> core holds 2019 and 2020 stats, because the yearly sharding process failed this year)</li>
|
||
<li>Attached is a before and after of the period from 2019-01 to 2020-02:</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/cgspace-stats-before.png" alt="CGSpace stats for 2019 and 2020 before the purge"></p>
|
||
<p><img src="/cgspace-notes/2020/02/cgspace-stats-after.png" alt="CGSpace stats for 2019 and 2020 after the purge"></p>
|
||
<ul>
|
||
<li>And here is a graph of the stats by year since 2011:</li>
|
||
</ul>
|
||
<p><img src="/cgspace-notes/2020/02/cgspace-stats-years.png" alt="CGSpace stats by year since 2011 after the purge"></p>
|
||
<ul>
|
||
<li>I’m a little suspicious of the 2012, 2013, and 2014 numbers, though
|
||
<ul>
|
||
<li>I should facet those years by IP and see if any stand out…</li>
|
||
</ul>
|
||
</li>
|
||
<li>The next thing I need to do is figure out why the nginx IP to bot mapping isn’t working…
|
||
<ul>
|
||
<li>Actually, and I’ve probably learned this before, but the bot mapping is working, but nginx only logs the real user agent (of course!), as I’m only using the mapped one in the proxy pass…</li>
|
||
<li>This trick for adding a header with the mapped “ua” variable is nice:</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>add_header X-debug-message "ua is $ua" always;
|
||
</code></pre><ul>
|
||
<li>Then in the HTTP response you see:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>X-debug-message: ua is bot
|
||
</code></pre><ul>
|
||
<li>So the IP to bot mapping is working, phew.</li>
|
||
<li>More bad news, I checked the remaining IPs in our existing bot IP mapping, and there are statistics registered for them!
|
||
<ul>
|
||
<li>For example, ciat.cgiar.org was previously 104.196.152.243, but it is now 35.237.175.180, which I had noticed as a “mystery” client on Google Cloud in 2018-09</li>
|
||
<li>Others I should probably add to the nginx bot map list are:
|
||
<ul>
|
||
<li>wle.cgiar.org (70.32.90.172)</li>
|
||
<li>ccafs.cgiar.org (205.186.128.185)</li>
|
||
<li>another CIAT scraper using the PHP GuzzleHttp library (45.5.184.72)</li>
|
||
<li>macaronilab.com (<a href="https://viewdns.info/reverseip/?host=63.32.242.35&t=1">63.32.242.35</a>)</li>
|
||
<li>africa-rising.net (<a href="https://viewdns.info/reverseip/?host=162.243.171.159&t=1">162.243.171.159</a>)</li>
|
||
</ul>
|
||
</li>
|
||
</ul>
|
||
</li>
|
||
<li>These IPs are all active in the REST API logs over the last few months and they account for <em>thirty-four million</em> more hits in the statistics!</li>
|
||
<li>I purged them from CGSpace:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics -p
|
||
Purging 15 hits from 104.196.152.243 in statistics
|
||
Purging 61064 hits from 35.237.175.180 in statistics
|
||
Purging 1378 hits from 70.32.90.172 in statistics
|
||
Purging 28880 hits from 205.186.128.185 in statistics
|
||
Purging 464613 hits from 63.32.242.35 in statistics
|
||
Purging 131 hits from 162.243.171.159 in statistics
|
||
|
||
Total number of bot hits purged: 556081
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2018 -p
|
||
Purging 684888 hits from 104.196.152.243 in statistics-2018
|
||
Purging 323737 hits from 35.227.26.162 in statistics-2018
|
||
Purging 221091 hits from 35.237.175.180 in statistics-2018
|
||
Purging 3834 hits from 205.186.128.185 in statistics-2018
|
||
Purging 20337 hits from 63.32.242.35 in statistics-2018
|
||
|
||
Total number of bot hits purged: 1253887
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2017 -p
|
||
Purging 1752548 hits from 104.196.152.243 in statistics-2017
|
||
|
||
Total number of bot hits purged: 1752548
|
||
</code></pre><ul>
|
||
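<ul>
<li>For reference, each purge is roughly equivalent to a Solr delete-by-query on the <code>ip</code> field — a sketch of the idea, not the script’s exact code:</li>
</ul>
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>ip:35.237.175.180</query></delete>"
</code></pre>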
<ul>
<li>I looked in the REST API logs for the past month and found a few more IPs:
<ul>
<li>95.110.154.135 (BioversityBot)</li>
<li>34.209.213.122 (IITA? bot)</li>
</ul>
</li>
<li>The client at 3.225.28.105 is using the following user agent:</li>
</ul>
<pre tabindex="0"><code>Apache-HttpClient/4.3.4 (java 1.5)
</code></pre>
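<ul>
<li>(For reference, the kind of quick tally that surfaces these heavy clients in the REST API logs is something like this — a rough sketch, and the log paths may differ:)</li>
</ul>
<pre tabindex="0"><code># zcat -f /var/log/nginx/rest.log* | awk '{print $1}' | sort | uniq -c | sort -h | tail
</code></pre>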
<ul>
<li>But I don’t see any hits for it in the statistics core for some reason</li>
<li>Looking more into the 2015 statistics I see some questionable IPs:
<ul>
<li>50.115.121.196 has a DNS of saltlakecity2tr.monitis.com</li>
<li>70.32.99.142 has userAgent Drupal</li>
<li>104.130.164.111 was some scraper on Rackspace.com that made ~30,000 requests per month</li>
<li>45.56.65.158 was some scraper on Linode that made ~30,000 requests per month</li>
<li>23.97.198.40 was some scraper with an IP owned by Microsoft that made ~4,000 requests per month and had no user agent</li>
<li>180.76.15.6 and <em>dozens</em> of other IPs with DNS like baiduspider-180-76-15-6.crawl.baidu.com. (and they were using a Mozilla/5.0 user agent!)</li>
</ul>
</li>
<li>I purged the hits from these IPs using <code>check-spider-ip-hits.sh</code>:</li>
</ul>
<pre tabindex="0"><code>$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics -p
|
||
Purging 11478 hits from 95.110.154.135 in statistics
|
||
Purging 1208 hits from 34.209.213.122 in statistics
|
||
Purging 10 hits from 54.184.39.242 in statistics
|
||
|
||
Total number of bot hits purged: 12696
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2018 -p
|
||
Purging 12572 hits from 95.110.154.135 in statistics-2018
|
||
Purging 233 hits from 34.209.213.122 in statistics-2018
|
||
|
||
Total number of bot hits purged: 12805
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2017 -p
|
||
Purging 37503 hits from 95.110.154.135 in statistics-2017
|
||
Purging 25 hits from 34.209.213.122 in statistics-2017
|
||
Purging 8621 hits from 23.97.198.40 in statistics-2017
|
||
|
||
Total number of bot hits purged: 46149
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2016 -p
|
||
Purging 1476 hits from 95.110.154.135 in statistics-2016
|
||
Purging 10490 hits from 70.32.99.142 in statistics-2016
|
||
Purging 29519 hits from 50.115.121.196 in statistics-2016
|
||
Purging 175758 hits from 45.56.65.158 in statistics-2016
|
||
Purging 26279 hits from 23.97.198.40 in statistics-2016
|
||
|
||
Total number of bot hits purged: 243522
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2015 -p
|
||
Purging 49351 hits from 70.32.99.142 in statistics-2015
|
||
Purging 30278 hits from 50.115.121.196 in statistics-2015
|
||
Purging 172292 hits from 104.130.164.111 in statistics-2015
|
||
Purging 78571 hits from 45.56.65.158 in statistics-2015
|
||
Purging 16069 hits from 23.97.198.40 in statistics-2015
|
||
|
||
Total number of bot hits purged: 346561
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2014 -p
|
||
Purging 462 hits from 70.32.99.142 in statistics-2014
|
||
Purging 1766 hits from 50.115.121.196 in statistics-2014
|
||
|
||
Total number of bot hits purged: 2228
|
||
</code></pre><ul>
|
||
<li>Then I purged about 200,000 Baidu hits from the 2015 to 2019 statistics cores with a few manual delete queries because they didn’t have a proper user agent and the only way to identify them was via DNS:</li>
</ul>
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics-2016/update?softCommit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>dns:*crawl.baidu.com.</query></delete>"
</code></pre>
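<ul>
<li>(Before deleting it’s easy to sanity check how many documents match by running the same query against the select handler — a sketch:)</li>
</ul>
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics-2016/select?q=dns:*crawl.baidu.com.&rows=0&wt=json"
</code></pre>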
<ul>
<li>Jesus, the more I keep looking, the more I see ridiculous stuff…</li>
<li>In 2019 there were a few hundred thousand requests from CodeObia on the Orange Jordan network…
<ul>
<li>79.173.222.114</li>
<li>149.200.141.57</li>
<li>86.108.89.91</li>
<li>And others…</li>
</ul>
</li>
<li>Also I see a CIAT IP 45.5.186.2 that was making hundreds of thousands of requests (and 100/sec at one point in 2019)</li>
<li>Also I see some IP on Hetzner making 10,000 requests per month: 2a01:4f8:210:51ef::2</li>
<li>Also I see some IP in Greece making 130,000 requests with weird user agents: 143.233.242.130</li>
<li>I purged a bunch more from all cores:</li>
</ul>
<pre tabindex="0"><code>$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics -p
|
||
Purging 109965 hits from 45.5.186.2 in statistics
|
||
Purging 78648 hits from 79.173.222.114 in statistics
|
||
Purging 49032 hits from 149.200.141.57 in statistics
|
||
Purging 26897 hits from 86.108.89.91 in statistics
|
||
Purging 80898 hits from 2a01:4f8:210:51ef::2 in statistics
|
||
Purging 130831 hits from 143.233.242.130 in statistics
|
||
Purging 46489 hits from 83.103.94.48 in statistics
|
||
|
||
Total number of bot hits purged: 522760
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2018 -p
|
||
Purging 41574 hits from 45.5.186.2 in statistics-2018
|
||
Purging 39620 hits from 2a01:4f8:210:51ef::2 in statistics-2018
|
||
Purging 19325 hits from 83.103.94.48 in statistics-2018
|
||
|
||
Total number of bot hits purged: 100519
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2017
|
||
Found 296 hits from 45.5.186.2 in statistics-2017
|
||
Found 390 hits from 2a01:4f8:210:51ef::2 in statistics-2017
|
||
Found 16086 hits from 83.103.94.48 in statistics-2017
|
||
|
||
Total number of hits from bots: 16772
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2017 -p
|
||
Purging 296 hits from 45.5.186.2 in statistics-2017
|
||
Purging 390 hits from 2a01:4f8:210:51ef::2 in statistics-2017
|
||
Purging 16086 hits from 83.103.94.48 in statistics-2017
|
||
|
||
Total number of bot hits purged: 16772
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2016 -p
|
||
Purging 394 hits from 2a01:4f8:210:51ef::2 in statistics-2016
|
||
Purging 26519 hits from 83.103.94.48 in statistics-2016
|
||
|
||
Total number of bot hits purged: 26913
|
||
$ ./check-spider-ip-hits.sh -u http://localhost:8081/solr -f /tmp/ips.txt -s statistics-2015 -p
|
||
Purging 1 hits from 143.233.242.130 in statistics-2015
|
||
Purging 14109 hits from 83.103.94.48 in statistics-2015
|
||
|
||
Total number of bot hits purged: 14110
|
||
</code></pre><ul>
|
||
<li>Though looking in my REST logs for the last month I am second-guessing my judgement on 45.5.186.2 because I see user agents like “Microsoft Office Word 2014”</li>
<li>Actually no, the overwhelming majority of these are coming from something harvesting the REST API with no user agent:</li>
</ul>
<pre tabindex="0"><code># zgrep 45.5.186.2 /var/log/nginx/rest.log.[1234]* | awk -F\" '{print $6}' | sort | uniq -c | sort -h
|
||
1 Microsoft Office Word 2014
|
||
1 Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 10.0; Win64; x64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; ms-office)
|
||
1 Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729)
|
||
1 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
|
||
2 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36
|
||
3 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36
|
||
24 GuzzleHttp/6.3.3 curl/7.59.0 PHP/7.0.31
|
||
34 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36
|
||
98 Apache-HttpClient/4.3.4 (java 1.5)
|
||
54850 -
|
||
</code></pre><ul>
|
||
<li>I see lots of requests coming from the following user agents:</li>
</ul>
<pre tabindex="0"><code>"Apache-HttpClient/4.5.7 (Java/11.0.3)"
|
||
"Apache-HttpClient/4.5.7 (Java/11.0.2)"
|
||
"LinkedInBot/1.0 (compatible; Mozilla/5.0; Jakarta Commons-HttpClient/4.3 +http://www.linkedin.com)"
|
||
"EventMachine HttpClient"
|
||
</code></pre><ul>
|
||
<li>I should definitely add HttpClient to the bot user agents…</li>
<li>Also, while <code>bot</code>, <code>spider</code>, and <code>crawl</code> are in the pattern list already and can be used for case-insensitive matching when used by DSpace in Java, I can’t do case-insensitive matching in Solr with <code>check-spider-hits.sh</code>
<ul>
<li>I need to add <code>Bot</code>, <code>Spider</code>, and <code>Crawl</code> to my local user agent file to purge them (a quick sketch for that is below, after the next code block)</li>
<li>Also, I see lots of hits from “Indy Library”, which we’ve been blocking for a long time, but somehow these got through (I think it’s the Greek guys using Delphi)</li>
<li>Somehow my regex conversion isn’t working in check-spider-hits.sh, but “<em>Indy</em>” will work for now</li>
<li>Purging just these case-sensitive patterns removed ~1 million more hits from 2011 to 2020</li>
</ul>
</li>
<li>More weird user agents in 2019:</li>
</ul>
<pre tabindex="0"><code>ecolink (+https://search.ecointernet.org/)
|
||
ecoweb (+https://search.ecointernet.org/)
|
||
EcoInternet http://www.ecointernet.org/
|
||
EcoInternet http://ecointernet.org/
|
||
</code></pre><h2 id="2020-02-25">2020-02-25</h2>
|
||
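<ul>
<li>(Related to the case-sensitivity issue above: one quick way to append capitalized copies of a few lowercase patterns to my local user agent file is GNU sed — just a sketch, with the file name as an example:)</li>
</ul>
<pre tabindex="0"><code># Uppercase the first character of each pattern, e.g. bot -> Bot (GNU sed)
$ printf 'bot\nspider\ncrawl\n' | sed 's/^./\U&/' >> /tmp/agents
</code></pre>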
<h2 id="2020-02-25">2020-02-25</h2>
<ul>
<li>And what about these 950,000 hits from Online.net IPs with the following user agent:</li>
</ul>
<pre tabindex="0"><code>Mozilla/5.0 ((Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6)
|
||
</code></pre><ul>
|
||
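<ul>
<li>(For reference, an exact user agent like this can be purged with a delete-by-query on the <code>userAgent</code> field — a sketch, not necessarily the exact query used here:)</li>
</ul>
<pre tabindex="0"><code>$ curl -s "http://localhost:8081/solr/statistics/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>userAgent:"Mozilla/5.0 ((Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6)"</query></delete>'
</code></pre>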
<ul>
<li>Over half of the requests were to Discover and Browse pages, and the rest were to actual item pages, but they were within seconds of each other, so I’m purging them all</li>
<li>I looked deeper in the Solr statistics and found a bunch more weird user agents:</li>
</ul>
<pre tabindex="0"><code>LinkedInBot/1.0 (compatible; Mozilla/5.0; Jakarta Commons-HttpClient/4.3
|
||
EventMachine HttpClient
|
||
ecolink (+https://search.ecointernet.org/)
|
||
ecoweb (+https://search.ecointernet.org/)
|
||
EcoInternet http://www.ecointernet.org/
|
||
EcoInternet http://ecointernet.org/
|
||
Biosphere EcoSearch http://search.ecointernet.org/
|
||
Typhoeus - https://github.com/typhoeus/typhoeus
|
||
Citoid (Wikimedia tool; learn more at https://www.mediawiki.org/wiki/Citoid)
|
||
node-fetch/1.0 (+https://github.com/bitinn/node-fetch)
|
||
7Siters/1.08 (+https://7ooo.ru/siters/)
|
||
sqlmap/1.0-dev-nongit-20190527 (http://sqlmap.org)
|
||
sqlmap/1.3.4.14#dev (http://sqlmap.org)
|
||
lua-resty-http/0.10 (Lua) ngx_lua/10000
|
||
omgili/0.5 +http://omgili.com
|
||
IZaBEE/IZaBEE-1.01 (Buzzing Abound The Web; https://izabee.com; info at izabee dot com)
|
||
Twurly v1.1 (https://twurly.org)
|
||
okhttp/3.11.0
|
||
okhttp/3.10.0
|
||
Pattern/2.6 +http://www.clips.ua.ac.be/pattern
|
||
Link Check; EPrints 3.3.x;
|
||
CyotekWebCopy/1.7 CyotekHTTP/2.0
|
||
Adestra Link Checker: http://www.adestra.co.uk
|
||
HTTPie/1.0.2
|
||
</code></pre><ul>
|
||
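<ul>
<li>(A quick way to surface these is to facet on the <code>userAgent</code> field in a statistics core — a sketch:)</li>
</ul>
<pre tabindex="0"><code>$ curl -s 'http://localhost:8081/solr/statistics/select?q=*:*&rows=0&facet=true&facet.field=userAgent&facet.limit=30&facet.mincount=1&wt=json'
</code></pre>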
<ul>
<li>I notice that some of these would be matched by the COUNTER-Robots list when DSpace uses it in Java because there we have more robust (and case-insensitive) matching
<ul>
<li>I created a temporary file of some of the patterns and converted them to use capitalization so I could run them through <code>check-spider-hits.sh</code></li>
</ul>
</li>
</ul>
<pre tabindex="0"><code>Link.?Check
|
||
Http.?Client
|
||
ecointernet
|
||
</code></pre><ul>
|
||
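<ul>
<li>(Running <code>check-spider-hits.sh</code> without <code>-p</code> first only reports the matches, which makes for a nice dry run before actually purging:)</li>
</ul>
<pre tabindex="0"><code>$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics
</code></pre>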
<ul>
<li>That removes another 500,000 or so:</li>
</ul>
<pre tabindex="0"><code>$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics -p
|
||
Purging 253 hits from Jersey\/[0-9] in statistics
|
||
Purging 7302 hits from Link.?Check in statistics
|
||
Purging 85574 hits from Http.?Client in statistics
|
||
Purging 495 hits from HTTPie\/[0-9] in statistics
|
||
Purging 56726 hits from ecointernet in statistics
|
||
|
||
Total number of bot hits purged: 150350
|
||
$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics-2018 -p
|
||
Purging 3442 hits from Link.?Check in statistics-2018
|
||
Purging 21922 hits from Http.?Client in statistics-2018
|
||
Purging 2120 hits from HTTPie\/[0-9] in statistics-2018
|
||
Purging 10 hits from ecointernet in statistics-2018
|
||
|
||
Total number of bot hits purged: 27494
|
||
$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics-2017 -p
|
||
Purging 6416 hits from Link.?Check in statistics-2017
|
||
Purging 403402 hits from Http.?Client in statistics-2017
|
||
Purging 12 hits from HTTPie\/[0-9] in statistics-2017
|
||
Purging 6 hits from ecointernet in statistics-2017
|
||
|
||
Total number of bot hits purged: 409836
|
||
$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics-2016 -p
|
||
Purging 2348 hits from Link.?Check in statistics-2016
|
||
Purging 225664 hits from Http.?Client in statistics-2016
|
||
Purging 15 hits from HTTPie\/[0-9] in statistics-2016
|
||
|
||
Total number of bot hits purged: 228027
|
||
$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics-2015 -p
|
||
Purging 3459 hits from Link.?Check in statistics-2015
|
||
Purging 263 hits from Http.?Client in statistics-2015
|
||
Purging 15 hits from HTTPie\/[0-9] in statistics-2015
|
||
|
||
Total number of bot hits purged: 3737
|
||
$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics-2014 -p
|
||
Purging 5 hits from Link.?Check in statistics-2014
|
||
Purging 8 hits from Http.?Client in statistics-2014
|
||
Purging 4 hits from HTTPie\/[0-9] in statistics-2014
|
||
|
||
Total number of bot hits purged: 17
|
||
$ ./check-spider-hits.sh -u http://localhost:8081/solr -f /tmp/agents -s statistics-2011 -p
|
||
Purging 159 hits from Http.?Client in statistics-2011
|
||
|
||
Total number of bot hits purged: 159
|
||
</code></pre><ul>
|
||
<li>Make pull requests for issues with user agents in the COUNTER-Robots repository:
<ul>
<li><a href="https://github.com/atmire/COUNTER-Robots/pull/33">Fix okhttp</a></li>
<li><a href="https://github.com/atmire/COUNTER-Robots/pull/34">Add new bots</a></li>
</ul>
</li>
<li>One benefit of all this is that the size of the statistics Solr core has shrunk by 6GiB since yesterday, though I can’t remember how big it was before that (a quick way to check is shown after this list)
<ul>
<li>According to my notes it was 43GiB in January when it failed the first time</li>
<li>I wonder if the sharding process would work now…</li>
</ul>
</li>
</ul>
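<ul>
<li>(A quick way to keep an eye on the core sizes on disk — a sketch, using the same <code>[dspace]</code> install directory placeholder as the Solr logs:)</li>
</ul>
<pre tabindex="0"><code># Show the on-disk size of each statistics core
$ du -sh [dspace]/solr/statistics*
</code></pre>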
<h2 id="2020-02-26">2020-02-26</h2>
|
||
<ul>
|
||
<li>Bosede finally got back to me about the IITA records from earlier last month (<a href="https://dspacetest.cgiar.org/handle/10568/106567">IITA_201907_Jan13</a>)
|
||
<ul>
|
||
<li>She said she has added more information to fifty-three of the journal articles, as I had requested</li>
|
||
</ul>
|
||
</li>
|
||
<li>I tried to migrate the 2019 Solr statistics again on CGSpace because the automatic sharding failed last month:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>$ export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xmx512m"
|
||
$ schedtool -D -e ionice -c2 -n7 dspace stats-util -s >> log/cron-stats-util.log.$(date --iso-8601)
|
||
</code></pre><ul>
|
||
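<ul>
<li>(While it runs the progress can be watched in the same log file the command appends to:)</li>
</ul>
<pre tabindex="0"><code>$ tail -f log/cron-stats-util.log.$(date --iso-8601)
</code></pre>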
<ul>
<li>Interestingly I saw this in the Solr log:</li>
</ul>
<pre tabindex="0"><code>2020-02-26 08:55:47,433 INFO org.apache.solr.core.SolrCore @ [statistics-2019] Opening new SolrCore at [dspace]/solr/statistics/, dataDir=[dspace]/solr/statistics-2019/data/
|
||
2020-02-26 08:55:47,511 INFO org.apache.solr.servlet.SolrDispatchFilter @ [admin] webapp=null path=/admin/cores params={dataDir=[dspace]/solr/statistics-2019/data&name=statistics-2019&action=CREATE&instanceDir=statistics&wt=javabin&version=2} status=0 QTime=590
|
||
</code></pre><ul>
|
||
<li>The process has been going for several hours now and I suspect it will fail eventually
<ul>
<li>I want to explore manually creating and migrating the core</li>
</ul>
</li>
<li>Manually create a core in the DSpace 6.4-SNAPSHOT instance on my local environment:</li>
</ul>
<pre tabindex="0"><code>$ curl 'http://localhost:8080/solr/admin/cores?action=CREATE&name=statistics-2019&instanceDir=/home/aorth/dspace63/solr/statistics&dataDir=/home/aorth/dspace63/solr/statistics-2019/data'
|
||
</code></pre><ul>
|
||
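<ul>
<li>(The CoreAdmin STATUS action is a handy way to check whether the core is actually loaded, for example:)</li>
</ul>
<pre tabindex="0"><code>$ curl 'http://localhost:8080/solr/admin/cores?action=STATUS&core=statistics-2019'
</code></pre>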
<ul>
<li>After that the <code>statistics-2019</code> core was immediately available in the Solr UI, but after restarting Tomcat it was gone
<ul>
<li>I wonder if importing some old statistics into the current <code>statistics</code> core and then letting DSpace create the <code>statistics-2019</code> core itself using <code>dspace stats-util -s</code> will work…</li>
</ul>
</li>
<li>First export a small slice of 2019 stats from the main CGSpace <code>statistics</code> core, skipping Atmire schema additions:</li>
</ul>
<pre tabindex="0"><code>$ ./run.sh -s http://localhost:8081/solr/statistics -a export -o /tmp/statistics-2019-01-16.json -f 'time:2019-01-16*' -k uid -S author_mtdt,author_mtdt_search,iso_mtdt_search,iso_mtdt,subject_mtdt,subject_mtdt_search,containerCollection,containerCommunity,containerItem,countryCode_ngram,countryCode_search,cua_version,dateYear,dateYearMonth,geoipcountrycode,ip_ngram,ip_search,isArchived,isInternal,isWithdrawn,containerBitstream,file_id,referrer_ngram,referrer_search,userAgent_ngram,userAgent_search,version_id,complete_query,complete_query_search,filterquery,ngram_query_search,ngram_simplequery_search,simple_query,simple_query_search,range,rangeDescription,rangeDescription_ngram,rangeDescription_search,range_ngram,range_search,actingGroupId,actorMemberGroupId,bitstreamCount,solr_update_time_stamp,bitstreamId
|
||
</code></pre><ul>
|
||
<li>Then import into my local <code>statistics</code> core:</li>
</ul>
<pre tabindex="0"><code>$ ./run.sh -s http://localhost:8080/solr/statistics -a import -o ~/Downloads/statistics-2019-01-16.json -k uid
|
||
$ ~/dspace63/bin/dspace stats-util -s
|
||
Moving: 21993 into core statistics-2019
|
||
</code></pre><ul>
|
||
<li>To my surprise, the <code>statistics-2019</code> core is created and the documents are immediately visible in the Solr UI!
<ul>
<li>Also, I am able to see the stats in DSpace’s default “View Usage Statistics” screen</li>
<li>Items appear with the word “(legacy)” at the end, i.e. “Improving farming practices in flood-prone areas in the Solomon Islands(legacy)”</li>
<li>Interestingly, if I make a bunch of requests for that item they will not be recognized as the same item, showing up as “Improving farming practices in flood-prone areas in the Solomon Islands” without the legacy identifier</li>
<li>I need to remember to test out the <a href="https://wiki.lyrasis.org/display/DSDOC6x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-UpgradeLegacyDSpaceObjectIdentifiers(pre-6xstatistics)toDSpace6xUUIDIdentifiers">SolrUpgradePre6xStatistics tool</a></li>
</ul>
</li>
<li>After restarting my local Tomcat on DSpace 6.4-SNAPSHOT the <code>statistics-2019</code> core loaded up…
<ul>
<li>I wonder what the difference is between the core I created vs the one created by <code>stats-util</code>?</li>
<li>I’m honestly considering just moving everything back into one core…</li>
<li>Or perhaps I can export all the stats for 2019 by month, then delete everything, re-import each month, and migrate them with stats-util</li>
</ul>
</li>
<li>A few hours later the sharding had completed successfully, so I guess I don’t have to worry about this any more for now, though I’m seriously considering moving all my data back into the one statistics core</li>
<li>Testing some <a href="https://wiki.lyrasis.org/display/DSPACE/DSpace+6.4+Release+Status">proposed patches for 6.4</a> in my local <code>6_x-dev64</code> branch</li>
<li><a href="https://jira.lyrasis.org/browse/DS-4135">DS-4135 (citation author UTF-8)</a>
<ul>
<li>Testing <a href="https://hdl.handle.net/10568/106959">item 10568/106959</a> before and after:</li>
</ul>
</li>
</ul>
<pre tabindex="0"><code><meta content="Thu hoạch v&agrave; bảo quản c&agrave; ph&ecirc; ch&egrave; đ&uacute;ng kỹ thuật (Harvesting and storing Arabica coffee)" name="citation_title">
|
||
<meta name="citation_title" content="Thu hoạch và bảo quản cà phê chè đúng kỹ thuật (Harvesting and storing Arabica coffee)" />
|
||
</code></pre><ul>
|
||
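<ul>
<li>(A quick way to grab the tag for comparison — a sketch, against whichever instance and item URL is being tested:)</li>
</ul>
<pre tabindex="0"><code>$ curl -s 'https://dspacetest.cgiar.org/handle/10568/106959' | grep -o '<meta[^>]*citation_title[^>]*>'
</code></pre>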
<li><a href="https://jira.lyrasis.org/browse/DS-4397">DS-4397 controlled vocabulary loading speedup</a></li>
|
||
</ul>
|
||
<h2 id="2020-02-27">2020-02-27</h2>
|
||
<ul>
|
||
<li>Tezira startd a discussion on Yammer about the ISI Journal field
|
||
<ul>
|
||
<li>She and Abenet both insist that selecting <code>N/A</code> for the “Journal status” in the submission form makes the item show <!-- raw HTML omitted -->ISI Journal<!-- raw HTML omitted --> on the item display page</li>
|
||
<li>I told them that the <code>N/A</code> does not store a value so this is impossible</li>
|
||
<li>I tested it to be sure on DSpace Test, and it does not show a value…</li>
|
||
<li>I checked this morning’s database snapshot and found three items that had a value of <code>N/A</code>, but they have already been fixed manually on CGSpace by Abenet or Tezira</li>
|
||
<li>I re-worded the <code>N/A</code> to say “Non-ISI Journal” in the submission form, though it still does not store a value</li>
|
||
</ul>
|
||
</li>
|
||
<li>I tested the one last remaining issue with our <code>6.x-dev</code> branch: the export CSV from search results
|
||
<ul>
|
||
<li>Last time I had tried that it didn’t work for some reason</li>
|
||
<li>Now I will <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=706">tell Atmire to get started</a></li>
|
||
</ul>
|
||
</li>
|
||
<li>I added some debugging to the Solr core loading in DSpace 6.4-SNAPSHOT (<code>SolrLoggerServiceImpl.java</code>) and I see this when DSpace starts up now:</li>
|
||
</ul>
|
||
<pre tabindex="0"><code>2020-02-27 12:26:35,695 INFO org.dspace.statistics.SolrLoggerServiceImpl @ Alan Ping of Solr Core [statistics-2019] Failed with [org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException]. New Core Will be Created
|
||
</code></pre><ul>
|
||
<li>When I check Solr I see the <code>statistics-2019</code> core loaded (from <code>stats-util -s</code> yesterday, not manually created)</li>
|
||
</ul>
|
||
<!-- raw HTML omitted -->

</article>
</div> <!-- /.blog-main -->

<aside class="col-sm-3 ml-auto blog-sidebar">

<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">

<li><a href="/cgspace-notes/2022-06/">June, 2022</a></li>

<li><a href="/cgspace-notes/2022-05/">May, 2022</a></li>

<li><a href="/cgspace-notes/2022-04/">April, 2022</a></li>

<li><a href="/cgspace-notes/2022-03/">March, 2022</a></li>

<li><a href="/cgspace-notes/2022-02/">February, 2022</a></li>

</ol>
</section>

<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">

<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>

<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>

<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>

</ol>
</section>

</aside>

</div> <!-- /.row -->
</div> <!-- /.container -->

<footer class="blog-footer">
<p dir="auto">

Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.

</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>

</body>

</html>