This includes a Swagger UI with an OpenAPI 3.0 JSON schema for easy
interactive demonstration and testing of the API. The JSON schema
was created with the standalone swagger-editor. Includes tests to
make sure that the /swagger and /docs/openapi.json paths are
accessible.
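A minimal sketch of such tests using Falcon's test client (the
`app` import path here is hypothetical; the real module name may
differ):

from falcon import testing

from app import app  # hypothetical import path


def test_get_swagger_ui():
    # The Swagger UI page should be reachable
    client = testing.TestClient(app)
    assert client.simulate_get("/swagger").status_code == 200


def test_get_openapi_json():
    # The OpenAPI 3.0 JSON schema should be served at this path
    client = testing.TestClient(app)
    assert client.simulate_get("/docs/openapi.json").status_code == 200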
Minor change to bot filtering. We should use a negated match for
documents that have `isBot:true` rather than looking for documents
that are tagged with `isBot:false` (the distinction is subtle, but
important).
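Illustratively, the difference in the Solr filter is (the field
name is from the statistics core; the surrounding query is
omitted):

# Before: only matches documents explicitly tagged with isBot:false,
# silently missing documents that have no isBot field at all.
solr_filter = "isBot:false"

# After: a negated match excludes known bots but keeps everything
# else, including documents that were never tagged.
solr_filter = "-isBot:true"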
The basic logic is similar to items, where you can request single
item statistics with a UUID, all item statistics, and item
statistics for a list of items (optionally with a date range). Most
of the item code was re-purposed to work on "elements", which can
be items, communities, or collections depending on the request,
with the use of Falcon's `before` hooks to set the statistics scope
so we know how to behave for the current request.
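A minimal sketch of the hook approach, assuming a hypothetical
set_statistics_scope function (the real names in the codebase may
differ):

import falcon

# Hypothetical mapping from URL path segment to statistics scope
SCOPE_BY_PATH = {
    "items": "item",
    "communities": "community",
    "collections": "collection",
}


def set_statistics_scope(req, resp, resource, params):
    # Derive the scope from the first path segment, e.g. a request
    # to /communities sets the scope to "community"
    req.context.statistics_scope = SCOPE_BY_PATH[req.path.split("/")[1]]


@falcon.before(set_statistics_scope)
class AllStatisticsResource:
    def on_post(self, req, resp):
        # Behave according to req.context.statistics_scope ("item",
        # "community", or "collection")
        ...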
Aside from the minor difference in facet fields, another issue I
had with communities and collections is that the owningComm and
owningColl fields are multi-valued (unlike items' id field). This
means that, when you facet the results of your query, Solr returns
ids that seem unrelated, but are actually present in the field, so
I had to make sure I checked all returned ids to see if they were
in the user's POSTed elements list.
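The check is conceptually something like this (variable names are
hypothetical):

# Facets on the multi-valued owningComm/owningColl fields can
# surface ids the user never asked about, so only keep counts for
# ids that were actually in the POSTed elements list.
views = {
    id_: count
    for id_, count in solr_facet_counts.items()
    if id_ in posted_elements
}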
TODO:
- Add tests
- Revise docstrings
- Refactor items.py as it is now generic
The logic to get views and downloads is very similar to that used
for items, but we facet by different fields. This uses a generic
function for indexing that takes an "indexType" and a "facetField"
parameter. The indexType parameter controls which database table
to insert into, and the facetField parameter indicates which field
to facet by in Solr.
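A sketch of the idea, with hypothetical names and a simplified
query:

def index_views(indexType: str, facetField: str):
    # indexType chooses the database table to insert into, e.g.
    # "items", "communities", or "collections"; facetField chooses
    # the Solr field to facet by, e.g. "id", "owningComm", or
    # "owningColl".
    solr_query_params = {
        "q": "type:2",
        "fq": "-isBot:true AND statistics_type:view",
        "facet": "true",
        "facet.field": facetField,
        "rows": 0,
    }
    # ...run the query, then insert the facet counts into the
    # table named by indexType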
A few months ago I had an issue setting up mocking because I was
trying to be clever by importing these libraries only when I needed
them rather than at the global scope. Someone pointed out to me
that if the imports are at the top of the file Falcon will load
them once when the WSGI server starts, whereas if they are inside
on_get() or on_post() the import statements will run on every
request! Also, it seems that PEP 8 recommends keeping imports at
the top of the file anyways, so I will just do that.
Imports sorted with isort.
See: https://www.python.org/dev/peps/pep-0008/#imports
When indexing item views and downloads the only field we need is
the id. The `fl` parameter tells Solr which fields to return in the
search results. This should theoretically be more efficient, though
I don't have any time to figure out how to measure it right now.
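Illustratively, the query parameters gain an fl entry (the rest of
the query here is hypothetical):

solr_query_params = {
    "q": "type:2",
    "fq": "-isBot:true AND statistics_type:view",
    # only return the id field in the search results, since that
    # is all we use when indexing views and downloads
    "fl": "id",
}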
Move the util import out of global scope because it causes tests
to fail. We don't need to set up the Solr connection unless we're
actually trying to use the get_views and get_downloads methods,
either when running the API in production or during tests where
the connection has been mocked.
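A sketch of the change, with hypothetical module and class names:

class AllItemsResource:
    def on_get(self, req, resp):
        # Importing util here instead of at module scope means
        # that merely importing this module (as the tests do) does
        # not try to set up the Solr connection.
        from .util import get_downloads, get_views
        ...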
I thought it was clever to only import these in the on_post
handler because they aren't needed elsewhere, but it turns out that
this is not a common pattern and it even causes problems with
testability. First, if the imports are at the top of the file as
PEP 8 recommends, then the WSGI server will import them once when
it loads the app and they remain in memory for the lifecycle of the
app. If the imports are in the on_post handler the import statement
would run again on every request! Second, importing inside a method
makes it tricky to use object patching in mocks.
See: https://www.python.org/dev/peps/pep-0008/#imports
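For example, with a top-level `from .util import get_views` in
items.py, a test can patch the name right where the handler looks
it up (the module path is hypothetical):

from unittest.mock import patch

# Patch the name in the module under test, where the top-level
# import bound it. If the import were inside on_post instead, the
# name would be re-bound from util on every call, and patching it
# on the items module would have no effect.
with patch("dspace_statistics_api.items.get_views") as mock_get_views:
    mock_get_views.return_value = {}
    # ...exercise the on_post handler here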
According to flake8 we need to use a different syntax for strings
with backslash escape sequences:
> As of Python 3.6, a backslash-character pair that is not a valid
> escape sequence now generates a DeprecationWarning. This will
> eventually become a SyntaxError.
The warning was:
W605 invalid escape sequence '\-'
See: https://www.flake8rules.com/rules/W605.html
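The usual fix is a raw string, so the backslash is passed through
to the regular expression engine untouched (the pattern here is
illustrative):

import re

# Before: flake8 warns about the invalid escape sequence '\-'
# pattern = re.compile("[0-9]+\-[0-9]+")

# After: the raw string leaves the backslash alone
pattern = re.compile(r"[0-9]+\-[0-9]+")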
I don't remember why we needed the stats, but it seems that it was
because without them there is no way to know how many results were
returned and therefore no way to know how many pages we'll need to
iterate over. Having the total number allows us to use a limit and
an offset to page through them deterministically.
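A sketch of the paging arithmetic, assuming the total came back as
numFound from an initial query (the numbers are illustrative):

import math

results_per_page = 100
results_total = 3504  # numFound from the initial Solr query

# Without the total we cannot know when to stop; with it, paging
# with a limit and an offset is deterministic.
pages = math.ceil(results_total / results_per_page)
for page in range(pages):
    offset = page * results_per_page
    # ...fetch rows=results_per_page starting at start=offset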
You can now POST a JSON request to /items with a list of items and
a date range. This allows the possibility to get view and download
statistics for arbitrary items and arbitrary date ranges.
The JSON request should be in the following format:
{
  "limit": 100,
  "page": 0,
  "dateFrom": "2020-01-01T00:00:00Z",
  "dateTo": "2020-09-09T00:00:00Z",
  "items": [
    "f44cf173-2344-4eb2-8f00-ee55df32c76f",
    "2324aa41-e9de-4a2b-bc36-16241464683e",
    "8542f9da-9ce1-4614-abf4-f2e3fdb4b305",
    "0fe573e7-042a-4240-a4d9-753b61233908"
  ]
}
The limit, page, and date parameters are all optional. By default
it will use a limit of 100, page 0, and a Solr date range of
[* TO *].
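For example, with the requests library (the host and port are
hypothetical):

import requests

response = requests.post(
    "http://localhost:8000/items",
    json={
        "limit": 100,
        "page": 0,
        "dateFrom": "2020-01-01T00:00:00Z",
        "dateTo": "2020-09-09T00:00:00Z",
        "items": ["f44cf173-2344-4eb2-8f00-ee55df32c76f"],
    },
)
print(response.json())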
We had previously been avoiding f-strings because we needed to
run on Python 3.5 and they were only available in Python 3.6+, but
now the black formatter requires Python 3.6 and all our systems are
running Python 3.6+ anyways.
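For example:

item_id = "f44cf173-2344-4eb2-8f00-ee55df32c76f"

# Python 3.5 compatible:
url = "/items/{}".format(item_id)

# Equivalent f-string, Python 3.6+:
url = f"/items/{item_id}"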