- And it seems that we need to enable the `pgcrypto` extension now (used for UUIDs):
```
$ psql -h localhost -U postgres dspace63
dspace63=# CREATE EXTENSION pgcrypto;
CREATE EXTENSION
```
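- A quick sanity check that the extension is actually available is to call one of its functions, for example `gen_random_uuid()` (just a check, not something DSpace needs run manually):
```
$ psql -h localhost -U postgres dspace63 -c 'SELECT gen_random_uuid();'
```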
- I tried importing a PostgreSQL snapshot from CGSpace and had errors due to missing Atmire database migrations
- If I try to run `dspace database migrate` I get the IDs of the migrations that are missing
- I deleted them manually in psql:
```
dspace63=# DELETE FROM schema_version WHERE version IN ('5.0.2015.01.27', '5.6.2015.12.03.2', '5.6.2016.08.08', '5.0.2017.04.28', '5.0.2017.09.25', '5.8.2015.12.03.3');
```
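- For reference, `dspace database info` prints the full Flyway migration history, which is an easy way to see exactly which (Atmire) versions the vanilla code doesn't recognize (the path is just where my local DSpace 6.3 is installed):
```
$ ~/dspace63/bin/dspace database info
```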
- Then I ran `dspace database migrate` and got an error:
```
Statement : ALTER TABLE metadatavalue DROP COLUMN IF EXISTS resource_id
at org.flywaydb.core.internal.dbsupport.SqlScript.execute(SqlScript.java:117)
at org.flywaydb.core.internal.resolver.sql.SqlMigrationExecutor.execute(SqlMigrationExecutor.java:71)
at org.flywaydb.core.internal.command.DbMigrate.doMigrate(DbMigrate.java:352)
at org.flywaydb.core.internal.command.DbMigrate.access$1100(DbMigrate.java:47)
at org.flywaydb.core.internal.command.DbMigrate$4.doInTransaction(DbMigrate.java:308)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:72)
at org.flywaydb.core.internal.command.DbMigrate.applyMigration(DbMigrate.java:305)
at org.flywaydb.core.internal.command.DbMigrate.access$1000(DbMigrate.java:47)
at org.flywaydb.core.internal.command.DbMigrate$2.doInTransaction(DbMigrate.java:230)
at org.flywaydb.core.internal.command.DbMigrate$2.doInTransaction(DbMigrate.java:173)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:72)
at org.flywaydb.core.internal.command.DbMigrate.migrate(DbMigrate.java:173)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:959)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:917)
at org.flywaydb.core.Flyway.execute(Flyway.java:1373)
at org.flywaydb.core.Flyway.migrate(Flyway.java:917)
at org.dspace.storage.rdbms.DatabaseUtils.updateDatabase(DatabaseUtils.java:662)
... 8 more
Caused by: org.postgresql.util.PSQLException: ERROR: cannot drop table metadatavalue column resource_id because other objects depend on it
Detail: view eperson_metadata depends on table metadatavalue column resource_id
Hint: Use DROP ... CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.flywaydb.core.internal.dbsupport.JdbcTemplate.executeStatement(JdbcTemplate.java:238)
at org.flywaydb.core.internal.dbsupport.SqlScript.execute(SqlScript.java:114)
... 24 more
```
- I think I might need to update the sequences first... nope
- Perhaps it's due to some missing bitstream IDs and I need to run `dspace cleanup` on CGSpace and take a new PostgreSQL dump... nope
- Someone in a thread on the dspace-tech mailing list about this migration noticed that their database had some views that were using the `resource_id` column
- Our database had the same issue: the `eperson_metadata` view was created by something (an Atmire module?) but has no references in the vanilla DSpace code, so I dropped it and tried the migration again:
```
dspace63=# DROP VIEW eperson_metadata;
DROP VIEW
```
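- For finding dependent views like this in the future, a query against PostgreSQL's `information_schema` should list every view that reads from `metadatavalue` (a sketch; in our case the culprit was only `eperson_metadata`):
```
$ psql -h localhost -U postgres dspace63 -c "SELECT DISTINCT view_name FROM information_schema.view_table_usage WHERE table_name='metadatavalue';"
```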
- After that the migration was successful and DSpace starts up successfully and begins indexing
- xmlui, solr, jspui, rest, and oai are working (rest was redirecting to HTTPS, so I set the Tomcat connector to `secure="true"` and it fixed it on localhost, but caused other issues so I disabled it for now)
- I notice that the indexing doesn't work correctly if I start it manually with `dspace index-discovery -b` (`search.resourceid` becomes an integer!)
- If I induce an indexing by touching `dspace/solr/search/conf/reindex.flag` the `search.resourceid` values are all UUIDs...
- Speaking of database stuff, there was a performance-related update for the [indexes that we used in DSpace 5](https://github.com/DSpace/DSpace/pull/1791/)
- We might want to [apply it in DSpace 6](https://github.com/DSpace/DSpace/pull/1792), as it was never merged to 6.x, but it helped with the performance of `/submissions` in XMLUI for us in [2018-03]({{< relref path="2018-03.md" >}})
- The indexing issue I was having yesterday seems to only present itself the first time a new installation is running DSpace 6
- Once the indexing induced by touching `dspace/solr/search/conf/reindex.flag` has finished, subsequent manual invocations of `dspace index-discovery -b` work as expected
- Nevertheless, I sent a message to the dspace-tech mailing list describing the issue to see if anyone has any comments
- I see that there are quite a few important commits on the unreleased DSpace 6.4, so it might be better for us to target that version
- I did a simple test and it's easy to rebase my current 6.3 branch on top of the upstream `dspace-6_x` branch:
```
$ git checkout -b 6_x-dev64 6_x-dev
$ git rebase -i upstream/dspace-6_x
```
- I finally understand why our themes show all the "Browse by" buttons on community and collection pages in DSpace 6.x
- The code in `./dspace-xmlui/src/main/java/org/dspace/app/xmlui/aspect/browseArtifacts/CommunityBrowse.java` iterates over all the browse indexes and prints them when it is called
- The XMLUI theme code in `dspace/modules/xmlui-mirage2/src/main/webapp/themes/0_CGIAR/xsl/preprocess/browse.xsl` calls the template because the id of the div matches "aspect.browseArtifacts.CommunityBrowse.list.community-browse"
- I checked the DRI of a community page on my local 6.x and DSpace Test 5.x by appending `?XML` to the URL and I see the ID is missing on DSpace 5.x
- The issue is the same with the ordering of the "My Account" link, but in Navigation.java
- I tried modifying `preprocess/browse.xsl` but it always ends up printing some default list of browse by links...
- I'm starting to wonder if Atmire's modules somehow override this, as I don't see how `CommunityBrowse.java` can behave like ours on DSpace 5.x unless they have overridden it (as the open source code is the same in 5.x and 6.x)
- At least the "account" link in the sidebar is overridden in our 5.x branch because Atmire copied a modified `Navigation.java` to the local xmlui modules folder... so that explains that (and it's easy to replicate in 6.x)
- Checking out the DSpace 6.x REST API query client
- There is a [tutorial](https://terrywbrady.github.io/restReportTutorial/intro) that explains how it works and I see it is very powerful because you can export a CSV of results in order to fix and re-upload them with batch import!
- Custom queries can be added in `dspace-rest/src/main/webapp/static/reports/restQueryReport.js`
- I noticed two new bots in the logs with the following user agents:
  - `Jersey/2.6 (HttpUrlConnection 1.8.0_152)`
  - `magpie-crawler/1.1 (U; Linux amd64; en-GB; +http://www.brandwatch.net)`
- I filed an [issue to add Jersey to the COUNTER-Robots](https://github.com/atmire/COUNTER-Robots/issues/30) list
- Peter noticed that the statlets on community, collection, and item pages aren't working on CGSpace
- I thought it might be related to the fact that the yearly sharding didn't complete successfully this year so the `statistics-2019` core is empty
- I removed the `statistics-2019` core and had to restart Tomcat like six times before all cores would load properly (ugh!!!!)
- After that the statlets were working properly...
- Run all system updates on DSpace Test (linode19) and restart it
- For reference, the database fixes needed on a fresh CGSpace PostgreSQL snapshot before the DSpace 6 migrations will run are:
```
dspace63=# DELETE FROM schema_version WHERE version IN ('5.0.2015.01.27', '5.6.2015.12.03.2', '5.6.2016.08.08', '5.0.2017.04.28', '5.0.2017.09.25', '5.8.2015.12.03.3');
dspace63=# DROP VIEW eperson_metadata;
dspace63=# \q
```
- I purged ~33,000 hits from the "Jersey/2.6" bot in CGSpace's statistics using my `check-spider-hits.sh` script:
- I found a [nice tool for exporting and importing Solr records](https://github.com/freedev/solr-import-export-json) and it seems to work for exporting our 2019 stats from the large statistics core!
- Follow up with [Atmire about DSpace 6.x upgrade](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=706)
- I raised the issue of targeting 6.4-SNAPSHOT as well as the Discovery indexing performance issues in 6.x
## 2020-02-11
- Maria from Bioversity asked me to add some ORCID iDs to our controlled vocabulary so I combined them with our existing ones and updated the names from the ORCID API:
- Udana from IWMI asked about the OAI base URL for their community on CGSpace
- I think it should be this: https://cgspace.cgiar.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc&set=com_10568_16814
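- A quick way to spot check a set URL like that is to count the `<record>` elements on the first page of results (up to the OAI page size):
```
$ curl -s 'https://cgspace.cgiar.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc&set=com_10568_16814' | grep -o '<record>' | wc -l
```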
## 2020-02-19
- I noticed a thread on the mailing list about the Tomcat header size and Solr max boolean clauses error
- The solution is to do as we have done and increase the headers / boolean clauses, or to simply [disable access rights awareness](https://wiki.lyrasis.org/display/DSPACE/TechnicalFaq#TechnicalFAQ-I'mgetting%22SolrException:BadRequest%22followedbyalongqueryora%22tooManyClauses%22Exception) in Discovery
- I applied the fix to the `5_x-prod` branch and cherry-picked it to `6_x-dev`
- Upgrade Tomcat from 7.0.99 to 7.0.100 in [Ansible infrastructure playbooks](https://github.com/ilri/rmg-ansible-public)
- Upgrade PostgreSQL JDBC driver from 42.2.9 to 42.2.10 in [Ansible infrastructure playbooks](https://github.com/ilri/rmg-ansible-public)
- Run Tomcat and PostgreSQL JDBC driver updates on DSpace Test (linode19)
- After removing the extra jfreechart library and restarting Tomcat I was able to load the usage statistics graph on DSpace Test...
- Hmm, actually I think this is a Java bug, perhaps introduced or at [least present in 18.04](https://bugs.openjdk.java.net/browse/JDK-8204862), with lots of [references](https://code-maven.com/slides/jenkins-intro/no-graph-error) to it [happening in other](https://issues.jenkins-ci.org/browse/JENKINS-39636) configurations like Debian 9 with Jenkins, etc...
- Apparently if you use the *non-headless* version of openjdk this doesn't happen... but that pulls in X11 stuff so no thanks
- Also, I see dozens of occurrences of this going back over one month (we have logs for about that period):
```
# grep -c 'initialize class org.jfree.chart.JFreeChart' dspace.log.2020-0*
dspace.log.2020-01-12:4
dspace.log.2020-01-13:66
dspace.log.2020-01-14:4
dspace.log.2020-01-15:36
dspace.log.2020-01-16:88
dspace.log.2020-01-17:4
dspace.log.2020-01-18:4
dspace.log.2020-01-19:4
dspace.log.2020-01-20:4
dspace.log.2020-01-21:4
...
```
- I deployed the fix on CGSpace (linode18) and I was able to see the graphs in the Atmire CUA Usage Statistics...
- On an unrelated note, there is something weird going on: I see millions of hits from IP 34.218.226.147 in Solr statistics, but if I remember correctly that IP belongs to CodeObia's AReS explorer, which should only be using the REST API and therefore shouldn't be generating Solr statistics...?
- I'm a little suspicious of the 2012, 2013, and 2014 numbers, though
- I should facet those years by IP and see if any stand out...
- The next thing I need to do is figure out why the nginx IP to bot mapping isn't working...
- Actually, and I've probably learned this before, but the bot mapping is working, but nginx only logs the real user agent (of course!), as I'm only using the mapped one in the proxy pass...
- This trick for adding a header with the mapped "ua" variable is nice:
```
add_header X-debug-message "ua is $ua" always;
```
- Then in the HTTP response you see:
```
X-debug-message: ua is bot
```
- So the IP to bot mapping is working, phew.
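- For context, the mapping itself is just an nginx `map` block keyed on the client address, something like this sketch (example IPs only, not our real list):
```
# map known harvester IPs to a generic "bot" user agent; everyone else
# keeps their real user agent
map $remote_addr $ua {
    default        $http_user_agent;
    192.0.2.10     'bot';
    192.0.2.11     'bot';
}
```
- `$ua` is then what gets sent to Tomcat in the proxy headers, which is exactly why nginx's own access log still shows the real agent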
- More bad news, I checked the remaining IPs in our existing bot IP mapping, and there are statistics registered for them!
- For example, ciat.cgiar.org was previously 104.196.152.243, but it is now 35.237.175.180, which I had noticed as a "mystery" client on Google Cloud in 2018-09
- Others I should probably add to the nginx bot map list are:
  - wle.cgiar.org (70.32.90.172)
  - ccafs.cgiar.org (205.186.128.185)
  - another CIAT scraper using the PHP GuzzleHttp library (45.5.184.72)
- Purging their hits from the Solr statistics:
```
Purging 462 hits from 70.32.99.142 in statistics-2014
Purging 1766 hits from 50.115.121.196 in statistics-2014
Total number of bot hits purged: 2228
```
- Then I purged about 200,000 Baidu hits from the 2015 to 2019 statistics cores with a few manual delete queries because they didn't have a proper user agent and the only way to identify them was via DNS:
```
Purging 1 hits from 143.233.242.130 in statistics-2015
Purging 14109 hits from 83.103.94.48 in statistics-2015
Total number of bot hits purged: 14110
```
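- Something like this delete-by-query is what I mean, matching on the statistics core's `dns` field (a sketch: the Solr URL, core name, and exact wildcard are assumptions, so check how the `dns` values actually appear in your index first):
```
$ curl -s "http://localhost:8081/solr/statistics-2016/update?softCommit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>dns:*.baidu.com.</query></delete>'
```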
- Though looking in my REST logs for the last month I am second guessing my judgement on 45.5.186.2 because I see user agents like "Microsoft Office Word 2014"
- Actually no, the overwhelming majority of these are coming from something harvesting the REST API with no user agent:
```
1 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
2 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36
3 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36
24 GuzzleHttp/6.3.3 curl/7.59.0 PHP/7.0.31
34 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36
98 Apache-HttpClient/4.3.4 (java 1.5)
54850 -
```
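- For the record, that tally is just the usual log pipeline, something like this (assuming nginx's combined log format, where the user agent is the sixth `"`-delimited field, and that the REST logs live where mine do):
```
# zcat -f passes plain files through and decompresses the rotated .gz logs
$ zcat -f /var/log/nginx/rest.log* | awk -F'"' '{print $6}' | sort | uniq -c | sort -n | tail
```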
- I see lots of requests coming from the following user agents:
```
"Apache-HttpClient/4.5.7 (Java/11.0.3)"
"Apache-HttpClient/4.5.7 (Java/11.0.2)"
"LinkedInBot/1.0 (compatible; Mozilla/5.0; Jakarta Commons-HttpClient/4.3 +http://www.linkedin.com)"
"EventMachine HttpClient"
```
- I should definitely add HttpClient to the bot user agents...
- Also, while `bot`, `spider`, and `crawl` are in the pattern list already and can be used for case-insensitive matching when used by DSpace in Java, I can't do case-insensitive matching in Solr with `check-spider-hits.sh`
- I need to add `Bot`, `Spider`, and `Crawl` to my local user agent file to purge them
- Also, I see lots of hits from "Indy Library", which we've been blocking for a long time, but somehow these got through (I think it's the Greek guys using Delphi)
- Somehow my regex conversion isn't working in check-spider-hits.sh, but "*Indy*" will work for now
- Purging just these case-sensitive patterns removed ~1 million more hits from 2011 to 2020
- And what about the 950,000 hits from Online.net IPs with the following user agent:
```
Mozilla/5.0 ((Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6)
```
- Over half of the requests were to Discover and Browse pages, and the rest were to actual item pages, but they were within seconds of each other, so I'm purging them all
- Some more bot-like user agents I noticed in the statistics:
```
IZaBEE/IZaBEE-1.01 (Buzzing Abound The Web; https://izabee.com; info at izabee dot com)
Twurly v1.1 (https://twurly.org)
okhttp/3.11.0
okhttp/3.10.0
Pattern/2.6 +http://www.clips.ua.ac.be/pattern
Link Check; EPrints 3.3.x;
CyotekWebCopy/1.7 CyotekHTTP/2.0
Adestra Link Checker: http://www.adestra.co.uk
HTTPie/1.0.2
```
- I notice that some of these would be matched by the COUNTER-Robots list when DSpace uses it in Java because there we have more robust (and case-insensitive) matching
- I created a temporary file of some of the patterns and converted them to use capitalization so I could run them through `check-spider-hits.sh`
- One benefit of all this is that the size of the statistics Solr core has reduced by 6GiB since yesterday, though I can't remember how big it was before that
- According to my notes it was 43GiB in January when it failed the first time
- When I created the `statistics-2019` core manually I saw this in the Solr log:
```
2020-02-26 08:55:47,433 INFO org.apache.solr.core.SolrCore @ [statistics-2019] Opening new SolrCore at [dspace]/solr/statistics/, dataDir=[dspace]/solr/statistics-2019/data/
```
- After that the `statistics-2019` core was immediately available in the Solr UI, but after restarting Tomcat it was gone
- I wonder if importing some old statistics into the current `statistics` core and then letting DSpace create the `statistics-2019` core itself using `dspace stats-util -s` will work...
- First I exported a small slice of the 2019 stats from the main CGSpace `statistics` core (skipping the Atmire schema additions), then imported it into my local `statistics` core and ran the sharding:
```
$ ./run.sh -s http://localhost:8080/solr/statistics -a import -o ~/Downloads/statistics-2019-01-16.json -k uid
$ ~/dspace63/bin/dspace stats-util -s
Moving: 21993 into core statistics-2019
```
- To my surprise, the `statistics-2019` core is created and the documents are immediately visible in the Solr UI!
- Also, I am able to see the stats in DSpace's default "View Usage Statistics" screen
- Items appear with the words "(legacy)" at the end, i.e. "Improving farming practices in flood-prone areas in the Solomon Islands(legacy)"
- Interestingly, if I make a bunch of requests for that item they will not be recognized as the same item, showing up as "Improving farming practices in flood-prone areas in the Solomon Islands" without the legacy identifier
- I need to remember to test out the [SolrUpgradePre6xStatistics tool](https://wiki.lyrasis.org/display/DSDOC6x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-UpgradeLegacyDSpaceObjectIdentifiers(pre-6xstatistics)toDSpace6xUUIDIdentifiers)
- After restarting my local Tomcat on DSpace 6.4-SNAPSHOT the `statistics-2019` core loaded up...
- I wonder what the difference is between the core I created vs the one created by `stats-util`?
- I'm honestly considering just moving everything back into one core...
- Or perhaps I can export all the stats for 2019 by month, then delete everything, re-import each month, and migrate them with stats-util
- A few hours later the sharding has completed successfully so I guess I don't have to worry about this any more for now, though I'm seriously considering moving all my data back into the one statistics core
- Tezira started a discussion on Yammer about the ISI Journal field
- She and Abenet both insist that selecting `N/A` for the "Journal status" in the submission form makes the item show <strong>ISI Journal</strong> on the item display page
- I told them that the `N/A` does not store a value so this is impossible
- I tested it to be sure on DSpace Test, and it does not show a value...
- I checked this morning's database snapshot and found three items that had a value of `N/A`, but they have already been fixed manually on CGSpace by Abenet or Tezira
- I re-worded the `N/A` to say "Non-ISI Journal" in the submission form, though it still does not store a value
- I tested the last remaining issue with our `6_x-dev` branch: the export CSV from search results
- Last time I had tried that it didn't work for some reason
- Now I will [tell Atmire to get started](https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=706)
- I added some debugging to the Solr core loading in DSpace 6.4-SNAPSHOT (`SolrLoggerServiceImpl.java`) and I see this when DSpace starts up now:
```
2020-02-27 12:26:35,695 INFO org.dspace.statistics.SolrLoggerServiceImpl @ Alan Ping of Solr Core [statistics-2019] Failed with [org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException]. New Core Will be Created
```
- When I check Solr I see the `statistics-2019` core loaded (from `stats-util -s` yesterday, not manually created)
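- A quick way to see which statistics cores Solr has actually loaded (rather than trusting the admin UI) is the CoreAdmin STATUS API (same assumption about my local Solr URL):
```
$ curl -s 'http://localhost:8080/solr/admin/cores?action=STATUS&wt=json' | python -m json.tool | grep '"name"'
```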