The logic to get views and downloads is very similar to that used
for items, but we facet by different fields. This uses a generic
function for indexing that takes an "indexType" and a "facetField"
parameter. The indexType parameter controls which database table
to insert into, and the facetField parameter indicates which field
to facet by in Solr.
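As a rough sketch of how such a generic function might look (the Solr URL, query values, and table names here are assumptions, not the project's actual ones):

    import requests

    def index_stats(index_type: str, facet_field: str, solr_url: str):
        # Facet the statistics core by facet_field and collect the counts.
        # index_type would select the target database table, for example
        # "views" or "downloads" (hypothetical names).
        params = {
            "q": "*:*",              # placeholder query
            "fq": "-isBot:true",
            "facet": "true",
            "facet.field": facet_field,
            "facet.mincount": 1,
            "rows": 0,               # we only want the facet counts
            "wt": "json",
        }
        res = requests.get(f"{solr_url}/statistics/select", params=params)
        # Solr returns facets as a flat [value, count, value, count, ...] list
        flat = res.json()["facet_counts"]["facet_fields"][facet_field]
        counts = dict(zip(flat[::2], flat[1::2]))
        # ...insert counts into the table selected by index_type...
        return counts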
A few months ago I had an issue setting up mocking because I was
trying to be clever by importing these libraries only when I needed
them rather than at the global scope. Someone pointed out to me
that if the imports are at the top of the file, Falcon loads them
once when the WSGI server starts, whereas if they are inside
on_get() or on_post() the import statement runs on every request
(Python caches the module after the first import, but the lookup
still happens each time). PEP 8 also recommends keeping imports at
the top of the file anyway, so I will just do that.
Imports sorted with isort.
See: https://www.python.org/dev/peps/pep-0008/#imports
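A minimal illustration of the pattern (the resource name is made up, and
falcon.App is falcon.API on older Falcon versions):

    import falcon  # imported once, when the WSGI server loads the app

    class ItemResource:
        def on_get(self, req, resp):
            # An `import` statement here would execute on every request;
            # at module level it runs exactly once at startup.
            resp.media = {"status": "ok"}

    api = falcon.App()
    api.add_route("/item", ItemResource())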
When indexing item views and downloads the only field we need is
the id. The `fl` parameter tells Solr which fields to return in the
search results. This should theoretically be more efficient, though
I haven't had time to measure it yet.
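For example, the query parameters would look something like this (the
query itself is a placeholder):

    solr_params = {
        "q": "*:*",       # placeholder query
        "fq": "-isBot:true",
        "fl": "id",       # only return the id field in each document
        "rows": 100,
        "wt": "json",
    }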
Minor change to bot filtering. We should use a negated match for
documents that have `isBot:true` rather than looking for documents
that are tagged with `isBot:false`. The distinction is subtle but
important: the negated match also keeps documents that have no
`isBot` field at all, whereas matching `isBot:false` silently
excludes them.
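Concretely, the two filter queries treat untagged documents differently:

    # Old: only matches documents explicitly tagged isBot:false; documents
    # with no isBot field at all are silently excluded.
    fq_old = "isBot:false"

    # New: matches everything except documents tagged isBot:true, which
    # correctly keeps documents where the field is absent.
    fq_new = "-isBot:true"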
Move util import from global scope because it causes tests to fail.
We don't need to set up the Solr connection unless we're actually
using the get_views and get_downloads methods, either when running
the API in production or during tests where the connection has been
set up.
I thought it was clever to only import these in the on_post handler
because they aren't needed elsewhere, but it turns out that this is
not a common pattern and even causes problems with testability.
First, if the imports are at the top of the file as PEP 8 recommends,
the WSGI server imports them once when it loads the app and they
remain in memory for the lifetime of the app. If the imports are in
the on_post handler, the import statement runs on every request
(Python caches the module, but it is still needless work per request).
Second, importing inside a method makes it tricky to use object
patching in mocks.
See: https://www.python.org/dev/peps/pep-0008/#imports
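For example, when app.py does `from util import get_views` at the top of
the file, a test can patch the name directly in app's namespace (module
and function names here are hypothetical):

    from unittest.mock import patch

    # Works because app.py bound get_views at import time. If the import
    # lived inside on_post(), the name would not exist in app's namespace
    # until the handler ran, and this patch would fail.
    with patch("app.get_views", return_value={"views": 42}):
        ...  # exercise the on_post handler here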
According to flake8 we need to use a different syntax for strings
with backslash escape sequences:
> As of Python 3.6, a backslash-character pair that is not a valid
> escape sequence now generates a DeprecationWarning. This will
> eventually become a SyntaxError.
The warning was:
W605 invalid escape sequence '\-'
See: https://www.flake8rules.com/rules/W605.html
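The fix is to use a raw string so the backslash reaches the regex engine
untouched (the pattern here is just an example):

    import re

    # Before: "\-" is an invalid escape sequence in a normal string (W605)
    # pattern = re.compile("\d\-\d")

    # After: raw string, no escape processing by Python itself
    pattern = re.compile(r"\d\-\d")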
I don't remember why we needed the stats, but it seems it was
because without them there is no way to know how many results were
returned, and therefore no way to know how many pages we'll need to
iterate over. Having the total number allows us to use a limit and
offset to page through them deterministically.
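A sketch of the idea, assuming we asked Solr for stats on the faceted
field (stats=true, stats.field=id, stats.calcdistinct=true) so the
response carries a distinct count:

    import math

    def page_count(response: dict, limit: int = 100) -> int:
        # countDistinct reports how many unique ids (facet rows) exist,
        # so we can derive how many limit/offset pages to iterate over.
        total = response["stats"]["stats_fields"]["id"]["countDistinct"]
        return math.ceil(total / limit)

Each page is then fetched with facet.limit set to the limit and
facet.offset set to page * limit.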
You can now POST a JSON request to /items with a list of items and
a date range. This allows the possibility to get view and download
statistics for arbitrary items and arbitrary date ranges.
The JSON request should be in the following format:
{
  "limit": 100,
  "page": 0,
  "dateFrom": "2020-01-01T00:00:00Z",
  "dateTo": "2020-09-09T00:00:00Z",
  "items": [
    "f44cf173-2344-4eb2-8f00-ee55df32c76f",
    "2324aa41-e9de-4a2b-bc36-16241464683e",
    "8542f9da-9ce1-4614-abf4-f2e3fdb4b305",
    "0fe573e7-042a-4240-a4d9-753b61233908"
  ]
}
The limit, page, and date parameters are all optional. By default
it will use a limit of 100, page 0, and the open-ended Solr date
range [* TO *].
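For example, with the requests library (the host and port are
assumptions):

    import requests

    payload = {
        "dateFrom": "2020-01-01T00:00:00Z",
        "dateTo": "2020-09-09T00:00:00Z",
        "items": [
            "f44cf173-2344-4eb2-8f00-ee55df32c76f",
            "2324aa41-e9de-4a2b-bc36-16241464683e",
        ],
    }
    res = requests.post("http://localhost:5000/items", json=payload)
    print(res.json())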
We had previously been avoiding f-strings because we needed to run
on Python 3.5 and they were only available in Python 3.6+, but now
the black formatter requires Python 3.6 and all our systems are
running Python 3.6+ anyway.
DSpace 6+ uses a UUID for item identifiers instead of an integer so
we need to adapt our PostgreSQL queries to use those. Note that we
can no longer sort results in the "all items" endpoint by ID. Also,
we need to use parameterized psycopg2 queries instead of strings to
support queries with UUIDs properly. To use the Python UUID objects
elsewhere in the code we need to make sure that we cast them to str.
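A sketch of the parameterized form (the DSN and table layout are
assumptions):

    import psycopg2

    connection = psycopg2.connect("dbname=dspacestatistics")
    cursor = connection.cursor()

    item_id = "f44cf173-2344-4eb2-8f00-ee55df32c76f"

    # psycopg2 quotes the value for us; naive string formatting would
    # break on UUIDs.
    cursor.execute("SELECT views FROM items WHERE id = %s", (str(item_id),))
    row = cursor.fetchone()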
DSpace 6+ uses a UUID for item identifiers instead of an integer so
we need to update the PostgreSQL schema accordingly. Solr still
refers to them as "id" in its schema so we don't need to change
anything there.
The SolrClient library is unmaintained, which is starting to cause
problems as the Python ecosystem moves on around it. Switching to
requests does not change my code in any meaningful way and makes
maintenance easier.
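With requests, the same query is just a GET against the select handler
(the URL is an assumption):

    import requests

    solr_url = "http://localhost:8983/solr"

    res = requests.get(
        f"{solr_url}/statistics/select",
        params={"q": "*:*", "rows": 0, "wt": "json"},
    )
    num_found = res.json()["response"]["numFound"]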
DSpace's stats-util script splits the Solr statistics core into yearly
shards. We need to use Solr's `shards` query parameter in order to get
the statistics for previous years. This commit adds a helper function
to enumerate the active Solr cores to find yearly shards matching the
statistics-YYYY pattern and add them to the query.
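A sketch of such a helper, using Solr's cores admin API (the URL and the
exact shards format are assumptions):

    import re
    import requests

    def get_statistics_shards(solr_url: str) -> str:
        # Ask Solr which cores are currently active...
        res = requests.get(
            f"{solr_url}/admin/cores",
            params={"action": "STATUS", "wt": "json"},
        )
        cores = res.json()["status"].keys()
        # ...and keep the main statistics core plus any statistics-YYYY shards
        shards = [
            f"{solr_url}/{core}"
            for core in cores
            if re.match(r"^statistics(-\d{4})?$", core)
        ]
        return ",".join(shards)

The result can then be passed along with the query as the `shards`
parameter so Solr searches all yearly cores at once.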