mirror of https://github.com/ilri/dspace-statistics-api.git synced 2024-12-23 13:04:39 +01:00
Commit Graph

399 Commits

Author SHA1 Message Date
a35ecf2394
Add Swagger UI on /swagger
This includes a Swagger UI with an OpenAPI 3.0 JSON schema for easy
interactive demonstration and testing of the API. The JSON schema
was created with the standalone swagger-editor. Includes tests to
make sure that the /swagger and /docs/openapi.json paths are
accessible.
2020-12-22 11:18:47 +02:00
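A rough sketch of what those accessibility tests might look like with Falcon's test client (the import path and test names here are assumptions, not taken from the repository):

    import falcon.testing
    import pytest

    from dspace_statistics_api.app import app  # assumed import path for the Falcon app


    @pytest.fixture
    def client():
        return falcon.testing.TestClient(app)


    def test_get_swagger_ui(client):
        # The Swagger UI page should be reachable
        assert client.simulate_get("/swagger").status_code == 200


    def test_get_openapi_json(client):
        # The OpenAPI 3.0 schema should be reachable
        assert client.simulate_get("/docs/openapi.json").status_code == 200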
3e271c7852
tests/dspacestatistics.sql: Update data
Add a new database snapshot with communities and collections.
2020-12-20 22:31:41 +02:00
d7ba14c590
tests: Add tests for communities and collections
Also, separate tests for items, communities, and collections into
their own files, leaving a single test for docs in its own file.
2020-12-20 22:12:13 +02:00
ab82e90773
dspace_statistics_api/stats.py: Use -isBot:true
Minor change to bot filtering. We should use a negated match for
documents that have `isBot:true` rather than looking for documents
that are tagged with `isBot:false` (the distinction is subtle, but
important).
2020-12-20 16:56:03 +02:00
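A hypothetical illustration of the difference between the two filters (the Solr URL and base query are assumptions): `isBot:false` only matches documents explicitly tagged as non-bots, while `-isBot:true` also matches documents that have no isBot field at all.

    import requests

    solr_url = "http://localhost:8080/solr/statistics/select"  # assumed URL

    # Old filter: only documents explicitly tagged isBot:false
    old_params = {"q": "type:2", "fq": "isBot:false", "rows": 0, "wt": "json"}

    # New filter: everything *not* tagged isBot:true, which also includes
    # documents that have no isBot field at all
    new_params = {"q": "type:2", "fq": "-isBot:true", "rows": 0, "wt": "json"}

    for params in (old_params, new_params):
        numFound = requests.get(solr_url, params=params).json()["response"]["numFound"]
        print(params["fq"], numFound)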
8a1244d2d0
Update changelog and docs 2020-12-20 16:45:49 +02:00
04f0756c7f
dspace_statistics_api/util.py: Add vim modeline 2020-12-20 16:31:52 +02:00
830e4415f5
dspace_statistics_api/app.py: Run isort 2020-12-20 16:29:35 +02:00
47b4eb3df7
Rename items.py to stats.py
It is no longer used only for item-related statistics functions.
2020-12-20 16:28:56 +02:00
3339bf8d9c
Add communities and collections support to API
The basic logic is similar to items, where you can request single
item statistics with a UUID, all item statistics, and item
statistics for a list of items (optionally with a date range). Most of
the item code was re-purposed to work on "elements", which can be
items, communities, or collections depending on the request, with
the use of Falcon's `before` hooks to set the statistics scope so
we know how to behave for the current request.

Other than the minor difference in facet fields, another issue I
had with communities and collections is that the owningComm and
owningColl fields are multi-valued (unlike items' id field). This
means that, when you facet the results of your query, Solr returns
ids that seem unrelated, but are actually present in the field, so
I had to make sure I checked all returned ids to see if they were
in the user's POSTed elements list.

TODO:
  - Add tests
  - Revise docstrings
  - Refactor items.py as it is now generic
2020-12-20 16:14:46 +02:00
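A minimal sketch of how such a `before` hook could set the scope (the hook name, paths, and context attribute are hypothetical; the actual implementation may differ):

    import falcon


    def set_statistics_scope(req, resp, resource, params):
        # Hypothetical: derive the element type from the request path so the
        # same resource code can serve items, communities, and collections
        if req.path.startswith("/communities"):
            req.context.statistics_scope = "community"
        elif req.path.startswith("/collections"):
            req.context.statistics_scope = "collection"
        else:
            req.context.statistics_scope = "item"


    @falcon.before(set_statistics_scope)
    class AllStatisticsResource:
        def on_get(self, req, resp):
            scope = req.context.statistics_scope
            # ... query the appropriate table and build the response
            resp.media = {"scope": scope, "statistics": []}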
fba6f1ead1 CHANGELOG.md: Update unreleased changes
2020-12-18 22:54:01 +02:00
20c8ba0cf8 indexer.py: Add support for communities and collections
The logic to get views and downloads is very similar to that used
for items, but we facet by different fields. This uses a generic
function for indexing that takes an "indexType" and a "facetField"
parameter. The indexType parameter controls which database table
to insert into, and the facetField parameter indicates which field
to facet by in Solr.
2020-12-18 22:53:16 +02:00
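A hedged sketch of that generic indexing shape (the Solr URL, query, and database connection string are assumptions; the real function in indexer.py may differ):

    import psycopg2
    import psycopg2.extras
    import requests


    def index_views(indexType: str, facetField: str):
        # Facet view events by facetField ("id" for items, "owningComm" for
        # communities, "owningColl" for collections)
        solr_params = {
            "q": "type:2",
            "fq": "-isBot:true AND statistics_type:view",
            "facet": "true",
            "facet.field": facetField,
            "rows": 0,
            "wt": "json",
        }
        res = requests.get("http://localhost:8080/solr/statistics/select", params=solr_params)
        facets = res.json()["facet_counts"]["facet_fields"][facetField]

        # Solr returns facets as a flat [id, count, id, count, ...] list
        data = list(zip(facets[0::2], facets[1::2]))

        # indexType controls which database table the counts are written to
        with psycopg2.connect("dbname=dspacestatistics") as connection:
            with connection.cursor() as cursor:
                sql = f"INSERT INTO {indexType}(id, views) VALUES %s ON CONFLICT(id) DO UPDATE SET views=excluded.views"
                psycopg2.extras.execute_values(cursor, sql, data)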
b486f51dd7 indexer.py: Rename index functions for items
Start making plans for indexing communities and collections.
2020-12-18 22:53:16 +02:00
787eec20ea
CHANGELOG.md: Add note about imports
2020-12-18 22:52:14 +02:00
9e6fcf279b
dspace_statistics_api/items.py: Format with black 2020-12-18 22:45:39 +02:00
4dbf734a4b
Move all imports to top of file
A few months ago I had an issue setting up mocking because I was
trying to be clever importing these libraries only when I needed
them rather than at the global scope. Someone pointed out to me
that if the imports are at the top of the file Falcon will load
them once when the WSGI server starts, whereas if they are in the
on_get() or on_post() they will load for every request! Also, it
seems that PEP8 recommends keeping imports at the top of the file
anyways, so I will just do that.

Imports sorted with isort.

See: https://www.python.org/dev/peps/pep-0008/#imports
2020-12-18 22:42:06 +02:00
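For illustration, the difference boils down to this (class names are made up):

    import psycopg2  # resolved once, when the WSGI server loads the module


    class EagerItemResource:
        def on_get(self, req, resp):
            # psycopg2 is already imported; nothing extra happens per request
            resp.media = {"ok": True}


    class LazyItemResource:
        def on_get(self, req, resp):
            # Anti-pattern: this import statement executes on every request.
            # Python caches the module so it is not re-loaded, but the lookup
            # still runs each time, and patching it in tests is awkward.
            import psycopg2

            resp.media = {"ok": True}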
a0d0a47150
items.py: Add fl parameter to Solr queries
I forgot to add the fl parameter here as well.
2020-12-18 16:12:34 +02:00
01e9756cf2
Update requirements
Generated with poetry export:

    $ poetry export -f requirements.txt > requirements.txt
    $ poetry export --dev -f requirements.txt > requirements-dev.txt
2020-12-18 11:20:17 +02:00
b2b4eb2939
poetry.lock: Run poetry update 2020-12-18 11:19:16 +02:00
4bbbaa4af3
dspace_statistics_api/indexer.py: Use fl parameter
I forgot to add the fl parameter to the downloads function.
2020-12-18 10:44:02 +02:00
7e4d5f4b13
README.md: Minor edit to intro 2020-12-18 10:42:48 +02:00
428172854d
README.md: Add TODO
2020-12-17 20:44:25 +02:00
2707cb37d5
CHANGELOG.md: Add note about fl parameter
2020-12-17 12:27:11 +02:00
2407aeec70
dspace_statistics_api/indexer.py: Use fl parameter
When indexing item views and downloads the only field we need is
the id. The `fl` parameter tells Solr which fields to return in the
search results. This should theoretically be more efficient, though
I don't have any time to figure out how to measure it right now.
2020-12-17 12:25:28 +02:00
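For example, a hypothetical query that pulls back only the id field rather than full statistics records (the URL and query are assumptions):

    import requests

    solr_params = {
        "q": "type:2",
        "fl": "id",  # only return the id field for each matching document
        "rows": 100,
        "wt": "json",
    }
    res = requests.get("http://localhost:8080/solr/statistics/select", params=solr_params)
    ids = [doc["id"] for doc in res.json()["response"]["docs"]]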
f3a0e3a671
CHANGELOG.md: Add note about ORDER BY
2020-12-17 10:17:23 +02:00
4590fc8708
dspace_statistics_api/app.py: Use ORDER BY in /items
Since we are paging through the results by limit/offset we need to
be sure that we are returning results deterministically.
2020-12-17 10:10:40 +02:00
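A small sketch of why this matters (the table and column names are assumed to match the items table):

    import psycopg2

    limit, offset = 100, 0

    with psycopg2.connect("dbname=dspacestatistics") as connection:
        with connection.cursor() as cursor:
            # Without ORDER BY, PostgreSQL may return rows in any order, so
            # successive limit/offset pages could overlap or skip rows.
            cursor.execute(
                "SELECT id, views, downloads FROM items ORDER BY id LIMIT %s OFFSET %s",
                (limit, offset),
            )
            for row in cursor:
                print(row)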
8b924cf450
Remove TravisCI config
I will use other CIs since TravisCI changed their business model.
2020-12-15 09:38:51 +02:00
ea24c73a6a
.drone.yml: Install gcc for Python 3.9
It appears to be needed to compile typed-ast:

    gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Iast27/Include -I/usr/local/include/python3.9 -c ast27/Custom/typed_ast.c -o build/temp.linux-x86_64-3.9/ast27/Custom/typed_ast.o
    error: command 'gcc' failed: No such file or directory
    ----------------------------------------
    ERROR: Failed building wheel for typed-ast
2020-12-14 22:50:21 +02:00
cd98d33615
.drone.yml: Only install requirements-dev.txt
It seems that Poetry's --dev export includes both dev and non-dev
libraries so we don't need to install both.
2020-12-14 22:42:00 +02:00
9d112266ca
Update requirements-dev.txt
Generated with poetry export:

    $ poetry export --dev -f requirements.txt > requirements-dev.txt
2020-12-14 22:05:18 +02:00
2b067050ff
Remove pytest-clarity
It is missing a six dependency which causes the build to fail. I
could simply add six to the virtualenv but it feels dirty. I don't
actually *need* pytest-clarity for anything so I'll just remove it.

See: https://github.com/darrenburns/pytest-clarity/issues/14
2020-12-14 22:05:03 +02:00
dc683f2d1c
pytest.ini: Change --strict to --strict-markers
This is deprecated since pytest 6.2.0.

See: https://docs.pytest.org/en/stable/deprecations.html#the-strict-command-line-option
2020-12-14 19:07:02 +02:00
f60f529bd7
Update requirements
Generated with poetry export:

    $ poetry export -f requirements.txt > requirements.txt
    $ poetry export --dev -f requirements.txt > requirements-dev.txt
2020-12-14 15:41:40 +02:00
7db8458201
poetry.lock: Run poetry update
[SKIP CI]
2020-12-14 15:40:06 +02:00
707f878b94 Add .drone.yml
Uses multiple pipelines to test several versions of Python. A few
things to note:

- I use the -slim Python packages, which are smaller and yet still
have no problem installing psycopg2-binary with pip
- I have to start a PostgreSQL database service for each pipeline
separately
2020-12-14 15:38:50 +02:00
930250352a
Update docs about POST /items 2020-12-13 20:09:20 +02:00
e27f30ba4d
README.md: Use travis-ci.com domain for badge link 2020-12-08 09:11:38 +02:00
28d1917038
README.md: Use travis-ci.com domain for badge 2020-12-08 09:09:19 +02:00
fc6a9c2ad1
tests: Update for real data
Now that CGSpace is running DSpace 6 I will use some real UUIDs to
make things easier in the future.
2020-11-25 14:56:47 +02:00
3125e96a16
Bump version to 1.3.2 2020-11-18 22:01:18 +02:00
66143ff00f
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

The `--without-hashes` is required to work around an issue with
gunicorn pulling in a dependency on setuptools that poetry ignores.

See: https://github.com/python-poetry/poetry/issues/1584
2020-11-18 21:59:33 +02:00
2d15f12be9
poetry.lock: Run poetry update 2020-11-18 21:58:32 +02:00
9218039e61
CHANGELOG.md: Add note about limit param 2020-11-18 21:57:12 +02:00
88a8db6c78
Make sure limit is between 1 and 100
We were not properly checking whether the limit was actually less
than or equal to 100.
2020-11-18 21:55:54 +02:00
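A minimal sketch of the intended check (the exact validation in app.py may differ):

    import falcon


    def validate_limit(limit: int):
        # Reject anything outside the inclusive 1..100 range
        if not 1 <= limit <= 100:
            raise falcon.HTTPBadRequest(
                title="Invalid parameter",
                description="The 'limit' parameter must be between 1 and 100.",
            )

Recent Falcon versions can also enforce this directly with `req.get_param_as_int("limit", min_value=1, max_value=100)`.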
3995eba0a7
CHANGELOG.md: Add note about Solr bot filter 2020-11-17 17:42:30 +02:00
810508d038
dspace_statistics_api/indexer.py: Use -isBot:true
Minor change to bot filtering. We should use a negated match for
documents that have `isBot:true` rather than looking for documents
that are tagged with `isBot:false` (the distinction is subtle, but
important).
2020-11-17 17:40:08 +02:00
ecafab57cb
README.md: Update DSpace version note 2020-11-16 16:16:21 +02:00
9c9431b58c
CHANGELOG.md: Add unreleased changes 2020-11-02 22:14:18 +02:00
2d6520fc97
Fix limit in docs 2020-11-02 22:14:08 +02:00
79a393d33f
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

The `--without-hashes` is required to work around an issue with
gunicorn pulling in a dependency on setuptools that poetry ignores.

See: https://github.com/python-poetry/poetry/issues/1584
2020-11-02 22:10:29 +02:00
149f6c418f
poetry.lock: Run poetry update 2020-11-02 22:00:29 +02:00