This was inserting correctly on the first run, but on subsequent runs
the ON CONFLICT branch was updating the wrong column. This made it
seem like there were downloads for items where there were none.
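To illustrate the fix, here is a sketch with hypothetical table and
column names (the real schema may differ); the point is that the
ON CONFLICT branch has to update the same column the INSERT targets:

    -- Buggy: a conflicting downloads insert updated the views column.
    INSERT INTO items(id, downloads) VALUES(%s, %s)
    ON CONFLICT(id) DO UPDATE SET views=excluded.downloads;

    -- Fixed: update the column that was actually being inserted.
    INSERT INTO items(id, downloads) VALUES(%s, %s)
    ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads;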
Batch inserts are much faster than a series of individual inserts
because they drastically reduce the overhead caused by round-trip
communication with the server. My tests in development confirm:
- cursor.execute(): 19 seconds
- execute_values(): 14 seconds
I'm currently only working with 4,500 rows, but I will experiment
with larger data sets, as well as larger batches. For example, on
the PostgreSQL mailing list a user reports doing 10,000 rows with
a page size of 100.
See: http://initd.org/psycopg/docs/extras.html#psycopg2.extras.execute_values
See: https://github.com/psycopg/psycopg2/issues/491#issuecomment-276551038
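For reference, a minimal sketch of the batch insert with
psycopg2's execute_values; the connection parameters, table, and
column names are illustrative:

    import psycopg2
    import psycopg2.extras

    # Hypothetical connection and row data for illustration.
    connection = psycopg2.connect('dbname=dspacestatistics')
    data = [('item-id-1', 100), ('item-id-2', 34)]  # (id, views)

    with connection.cursor() as cursor:
        # One statement per page of rows instead of one round trip
        # per row; page_size controls rows per statement.
        psycopg2.extras.execute_values(
            cursor,
            'INSERT INTO items(id, views) VALUES %s'
            ' ON CONFLICT(id) DO UPDATE SET views=excluded.views',
            data,
            page_size=100,
        )
    connection.commit()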
Basically, Solr's numFound is the number of matching documents and
has nothing to do with the actual number of distinct facets that are
returned. You need to use Solr's stats component to get the number
of distinct facets, aka countDistinct.
This is apparently deprecated in newer Solr versions, but we're on
version 4.10 and it works there.
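A sketch of the kind of query this implies, assuming a hypothetical
statistics core and query values (the exact q and URL depend on our
Solr setup):

    import requests

    # Hypothetical Solr URL and query parameters.
    solr_url = 'http://localhost:8080/solr/statistics/select'
    params = {
        'q': 'type:2',              # illustrative query for item views
        'rows': 0,                  # only the stats, not the documents
        'stats': 'true',
        'stats.field': 'id',
        'stats.calcdistinct': 'true',
        'wt': 'json',
    }
    response = requests.get(solr_url, params=params)
    # countDistinct, not numFound, is the number of distinct ids.
    stats = response.json()['stats']['stats_fields']['id']
    print(stats['countDistinct'])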
Also, I realized that there is no need to return facets for items
without any views or downloads. Using facet.mincount=1 reduces the
result set size and also means we can store less data in the
database. The API returns HTTP 404 Not Found if an item is not in
the database anyway.
I can't figure out the exact cause, but there is some weird issue
with Solr's facet results when you don't use facet.mincount=1. For
some reason you get tons of results with ids that don't even exist
in the document database, let alone as actual DSpace items!
See: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
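And the faceting side, with facet.mincount set so zero-count buckets
are dropped (again, the query values are illustrative):

    # Hypothetical facet parameters for the same statistics core.
    params = {
        'q': 'type:2',
        'rows': 0,
        'facet': 'true',
        'facet.field': 'id',
        'facet.mincount': 1,   # only ids with at least one hit
        'facet.limit': 100,
        'facet.offset': 0,
        'wt': 'json',
    }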
I've decided to use PostgreSQL instead of SQLite because UPSERT
support is available in versions of PostgreSQL we're already running,
whereas SQLite needs a VERY new version (3.24.0) that is not
available on any recent long-term support Ubuntu release.
I was very surprised by how easy, fast, and robust SQLite was, but
in the end I realized that its UPSERT support only arrived in version
3.24, and both Ubuntu 16.04 and 18.04 ship older versions than that!
I did manage to install libsqlite3-0 from Ubuntu 18.10 (cosmic) on my
16.04 (xenial) host, but that feels dirty.
PostgreSQL has had UPSERT support since 9.5, not to mention the same
nice LIMIT and OFFSET clauses.
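For example, paging in the API maps directly onto LIMIT and OFFSET;
a sketch using the illustrative items table from the notes below:

    -- Page 3 of results, 100 items per page (limit=100, offset=200).
    SELECT id, views, downloads FROM items
    ORDER BY id LIMIT 100 OFFSET 200;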
I was using two separate tables for item views and downloads without
realizing that SQLite doesn't support FULL OUTER JOIN, which would be
needed to get views and downloads for a given item in a single query.
Instead I can use one table with a default value of 0 for both views
and downloads, and then use "UPSERT" to populate the statistics. This
is a newish SQL concept that allows you to attempt an INSERT and then
specify an action to perform in case of conflict. It works well in
SQLite and actually simplifies my Python logic greatly!
Note that the "excluded" table qualifier is a special keyword that
allows you to reference the value that would have been inserted.
See: https://www.sqlite.org/lang_UPSERT.html
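A sketch of the whole idea, with illustrative names: one table with
defaults of 0, populated by UPSERTs that each touch only their own
column.

    -- One table with defaults instead of two tables joined together.
    CREATE TABLE items(
        id TEXT PRIMARY KEY,
        views INTEGER DEFAULT 0,
        downloads INTEGER DEFAULT 0
    );

    -- Insert a views count, or overwrite it if the item exists;
    -- excluded.views is the value the INSERT would have written.
    INSERT INTO items(id, views) VALUES('item-id-1', 100)
    ON CONFLICT(id) DO UPDATE SET views=excluded.views;

    -- A later downloads UPSERT leaves the views column untouched.
    INSERT INTO items(id, downloads) VALUES('item-id-1', 25)
    ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads;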