Basically, Solr's numFound has nothing to do with the actual number
of distinct facets returned. You need to use Solr's stats component
to get the number of distinct facets, aka countDistinct. This is
apparently deprecated in newer Solr versions, but we're on version
4.10 and it works there.
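As a sketch, the query looks something like this using the requests
library (the core URL and the "id" field are my assumptions for
illustration):

    import requests

    # Ask Solr's stats component for stats on the "id" field. With
    # stats.calcdistinct=true the response includes countDistinct
    # (this is the Solr 4.10 syntax that is deprecated later on).
    params = {
        "q": "*:*",
        "rows": 0,
        "stats": "true",
        "stats.field": "id",
        "stats.calcdistinct": "true",
        "wt": "json",
    }
    response = requests.get("http://localhost:8080/solr/statistics/select", params=params)
    stats = response.json()["stats"]["stats_fields"]["id"]
    print(stats["countDistinct"])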
Also, I realized that there is no need to return facets for items
without any views or downloads. Using facet.mincount=1 reduces the
result set size and means we can store less data in the database.
The API returns HTTP 404 Not Found if an item is not in the database
anyway.
I can't figure it out exactly, but there is some weird issue with
Solr's facet results when you don't use facet.mincount=1. For some
reason you get tons of results with an id that doesn't even exist
in the document database, let alone as an actual DSpace item!
See: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
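As a sketch, the facet query with facet.mincount=1 looks something
like this (core URL and field names are again assumptions):

    import requests

    # Facet on "id", but only return facets with at least one hit,
    # so items with zero views or downloads are omitted entirely.
    params = {
        "q": "*:*",
        "rows": 0,
        "facet": "true",
        "facet.field": "id",
        "facet.mincount": 1,
        "facet.limit": -1,
        "wt": "json",
    }
    response = requests.get("http://localhost:8080/solr/statistics/select", params=params)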
You can test OnCalendar strings using systemd-analyze calendar, e.g.:
# systemd-analyze calendar '*-*-* 06:00:00,18:00:00'
Failed to parse calendar specification '*-*-* 06:00:00,18:00:00':
Invalid argument
# systemd-analyze calendar '*-*-* 06,18:00:00'
Normalized form: *-*-* 06,18:00:00
Next elapse: Wed 2018-09-26 06:00:00 EEST
(in UTC): Wed 2018-09-26 03:00:00 UTC
From now: 6h left
SolrClient 0.2.1 currently depends on kazoo 2.2.1, but there is an
issue with Python 3.7 in kazoo < 2.5.0. Kazoo 2.5.0 fixes the issue
with Python 3.7, and for my limited usage of SolrClient it seems to
work fine.
See: https://github.com/moonlitesolutions/SolrClient/issues/79
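One workaround is to pin kazoo explicitly in requirements.txt
alongside SolrClient (this is my assumption of the simplest fix):

    SolrClient==0.2.1
    kazoo==2.5.0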
Generated with `pip freeze`. This is so I can pin the versions of
packages that I've tested with, allow Travis to test whether the
project runs on various Python versions, and let GitHub inform me
of vulnerabilities in some libraries.
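That is, something like:

    $ pip freeze > requirements.txt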
This allows us to access records using their column names. I didn't
notice that this was not working, as I had been testing on the wrong
server!
See: http://initd.org/psycopg/docs/extras.html
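A minimal sketch of the cursor setup (the connection parameters and
the table/column names are assumptions):

    import psycopg2
    import psycopg2.extras

    connection = psycopg2.connect("dbname=dspacestatistics user=dspacestatistics")
    # DictCursor returns rows that can be indexed by column name
    cursor = connection.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cursor.execute("SELECT id, views, downloads FROM items LIMIT 1")
    row = cursor.fetchone()
    print(row["views"])  # instead of row[1]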
I've decided to use PostgreSQL instead of SQLite because UPSERT
support is available in the versions of PostgreSQL we're already
running, whereas SQLite needs a VERY new version (3.24.0) that is
not available on any recent long-term support Ubuntu releases.
I was very surprised how easy, fast, and robust SQLite was, but in
the end I realized that its UPSERT support only arrived in version
3.24.0, and both Ubuntu 16.04 and 18.04 have older versions than
that! I did manage to install libsqlite3-0 from Ubuntu 18.10 (cosmic)
on my xenial host, but that feels dirty.
PostgreSQL has supported UPSERT since 9.5, not to mention the same
nice LIMIT and OFFSET clauses.
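A sketch of the PostgreSQL flavor via psycopg2 (table and column
names are assumptions):

    import psycopg2

    connection = psycopg2.connect("dbname=dspacestatistics user=dspacestatistics")
    cursor = connection.cursor()
    # Insert the item, or overwrite its view count if it already exists
    cursor.execute(
        "INSERT INTO items(id, views) VALUES(%s, %s) "
        "ON CONFLICT(id) DO UPDATE SET views=EXCLUDED.views",
        (101, 42),
    )
    connection.commit()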
This route exposes all item statistics and uses the limit and offset
parameters to control paging through the result set. The logic here
is extremely easy thanks to the brilliant LIMIT and OFFSET features
of SQLite (of course the SQL query sorts the results by a unique
field so the order is stable between pages).
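A sketch of the paging logic (parameter handling and names are
assumptions):

    import sqlite3

    connection = sqlite3.connect("statistics.db")
    cursor = connection.cursor()

    limit = 100  # from the ?limit= query parameter
    offset = 0   # from the ?offset= query parameter
    # Sorting on the unique id keeps pages stable between requests
    cursor.execute(
        "SELECT id, views, downloads FROM items ORDER BY id LIMIT ? OFFSET ?",
        (limit, offset),
    )
    rows = cursor.fetchall()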
The indexer.py script was updated to use a single table because I
learned about UPSERT. This simplifies the database schema and the
Python logic, and makes it easier to page all views and downloads
at once without complicated JOIN queries.
I was using two separate tables for item views and downloads without
realizing that SQLite didn't support FULL OUTER JOIN, which would be
needed to get views and downloads for a given item in a single query.
Instead I can use one table with a default value of 0 for both views
and downloads, and then use "UPSERT" to populate the statistics. This
is a newish SQL concept that allows you to attempt an INSERT and then
specify an action to perform in case of conflict. This works well in
SQLite and actually simplifies my Python logic greatly!
Note that the "excluded" table qualifier is a special keyword that
allows you to reference the value that would have been inserted.
See: https://www.sqlite.org/lang_UPSERT.html
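A minimal sketch of the pattern (table and column names are
assumptions; needs SQLite 3.24.0 or newer):

    import sqlite3

    connection = sqlite3.connect("statistics.db")
    connection.execute(
        "CREATE TABLE IF NOT EXISTS items "
        "(id INTEGER PRIMARY KEY, views INTEGER DEFAULT 0, downloads INTEGER DEFAULT 0)"
    )
    # Insert the item, or overwrite its view count on conflict;
    # "excluded.views" is the value we tried to insert above
    connection.execute(
        "INSERT INTO items(id, views) VALUES(?, ?) "
        "ON CONFLICT(id) DO UPDATE SET views=excluded.views",
        (101, 42),
    )
    connection.commit()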