Mirror of https://github.com/ilri/dspace-statistics-api.git

Compare commits


188 Commits

Author SHA1 Message Date
914ec52fbb CHANGELOG.md: Move unreleased changes to 0.9.0 2019-01-22 09:02:29 +02:00
5524066656 CHANGELOG.md: Add note about catching errors 2019-01-22 09:01:54 +02:00
043d897cef dspace_statistics_api/indexer.py: Catch case of no views/downloads
Don't fail with an exception when there are no views or downloads,
for example on a new DSpace installation.
2019-01-22 09:00:22 +02:00
bd28353cda README.md: Remove TODO for fixing querying of shards 2019-01-22 08:41:39 +02:00
e23d66c2a2 CHANGELOG.md: Add note about fixing querying of sharded cores 2019-01-22 08:41:31 +02:00
40e284dac0 dspace_statistics_api/indexer.py: Query multiple shards
DSpace's stats-util script splits the Solr statistics core into yearly
shards. We need to use Solr's `shards` query parameter in order to get
the statistics for previous years. This commit adds a helper function
to enumerate the active Solr cores to find yearly shards matching the
statistics-YYYY pattern and add them to the query.
2019-01-22 08:39:36 +02:00
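A rough sketch of what the resulting `shards` parameter looks like (the host and core names here are illustrative, not taken from the commit):

    # Illustrative sketch: the enumerated yearly cores are joined into a
    # comma-separated string and passed to Solr via the 'shards' parameter.
    solr_host = 'http://localhost:8081/solr'               # assumed Solr base URL
    yearly_cores = ['statistics-2017', 'statistics-2018']  # assumed shard names

    shards = ','.join(['{}/statistics'.format(solr_host)]
                      + ['{}/{}'.format(solr_host, core) for core in yearly_cores])
    # 'http://localhost:8081/solr/statistics,http://localhost:8081/solr/statistics-2017,...'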
934fa9db9b README.md: Add TODO about sharded statistics cores 2019-01-21 12:55:43 +02:00
1fabb72b58 Update requirements
Generated from pipenv with:

  $ pipenv lock -r > requirements.txt
  $ pipenv lock -r -d > requirements-dev.txt
2019-01-16 12:34:50 +02:00
c7f95f0b60 README.md: Update TODO
I think it might be possible to compute community and collection
statistics from Solr and make them available at new endpoints:

  - /communities
  - /community/id
  - /collections
  - /collection/id
2019-01-16 09:59:29 +02:00
c95a98dd2d Pipfile.lock: update dependencies
Updated with `pipenv update`.
2019-01-15 10:22:46 +02:00
3f70f94a10 Pipfile.lock: Run pipenv update 2018-11-26 11:53:37 +02:00
9b8ad9defd Merge pull request #9 from ilri/pipenv-update
Pipenv update
2018-11-19 23:50:44 +02:00
d69ab20220 CHANGELOG.md: pytest version 4.0.0 2018-11-19 23:46:03 +02:00
378f56ddc2 Pipfile.lock: Run pipenv update 2018-11-19 23:34:34 +02:00
5a2a7d684c CHANGELOG.md: Move unreleased changes to version 0.8.1 2018-11-14 09:37:00 +02:00
18276e910f CHANGELOG.md: Add notes about pipenv 2018-11-14 09:36:13 +02:00
8de8c2765f Merge pull request #8 from ilri/update-dependencies
Update dependencies
2018-11-14 09:34:45 +02:00
11a1755e59 Update requirements.txt
Generated from pipenv with:

    $ pipenv lock -r > requirements.txt
2018-11-14 09:19:47 +02:00
a835b0fdc5 Re-create pipenv environment from scratch
When I originally created the pipenv environment I used the standard
pip requirements.txt that I already had, which captured all the
modules and their exact versions at the time. This makes it hard to
separate the project's actual dependencies from the dependencies'
dependencies, complicating the Pipfile and making it hard to update
module versions later.

I've re-created the environment with the following commands:

    $ pipenv install gunicorn falcon psycopg2-binary git+https://github.com/alanorth/SolrClient.git@kazoo-2.5.0#egg=SolrClient
    $ pipenv install --dev ipython flake8 pytest
2018-11-14 09:07:32 +02:00
a88600c92b README.md: Add note about GPLv3 2018-11-13 12:34:31 +02:00
019d9242c9 Merge pull request #7 from ilri/use-pip
Rework to use pip instead of pipenv
2018-11-12 09:17:16 +02:00
f4d7312a3f CHANGELOG.md: Add unreleased changes 2018-11-12 09:02:04 +02:00
9c46cfc7e2 Use Python 3.7 for pipenv
Now that I'm only using pipenv locally it shouldn't create problems
for people. They can still just create a vanilla virtualenv and use
pip to install the dependencies.
2018-11-12 08:54:54 +02:00
c1c2e319ac README.md: Rework to use pip instead of pipenv
Pipenv is great for local development, but I don't think many people
are using it yet. I can use it locally and on Travis, but still keep
vanilla requirements.txt for use with pip. The requirements.txt file
can be generated easily from pipenv itself:

    $ pipenv lock -r > requirements.txt

The same for the development requirements:

    $ pipenv lock -r -d > requirements-dev.txt
2018-11-12 08:49:02 +02:00
0895b4f469 Add requirements-dev.txt for pip
Generated with pipenv lock -r -d. Will be used for separating the
development dependencies.
2018-11-12 08:48:45 +02:00
dcfef06a65 Pipfile.lock: Run pipenv update 2018-11-12 08:20:47 +02:00
13736d6359 CHANGELOG.md: Move unreleased changes to version 0.8.0 2018-11-11 17:16:26 +02:00
4fc64edeb8 Merge pull request #6 from ilri/pytest
Pytest
2018-11-11 17:14:49 +02:00
2a8901dc4f CHANGELOG.md: Update notes 2018-11-11 17:10:45 +02:00
e25c974796 README.md: We have tests now 2018-11-11 17:08:51 +02:00
ffc62e9ee6 tests/test_api.py: Use response.text for all json.loads()
This allows the code to work in Python 3.5 as well as 3.6+.
2018-11-11 17:05:31 +02:00
556c5ae088 tests/test_api.py: Use response.text instead of content
Falcon's response content is raw bytes, while its text is a string.
Let's use the latter so we can use json.loads() in Python 3.5, 3.6,
and 3.7 with the same code.

See: https://falcon.readthedocs.io/en/stable/api/testing.html
2018-11-11 17:01:17 +02:00
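A minimal sketch of such a test, assuming Falcon's `testing.TestClient` wrapped around this project's `api` object (the fixture and test names are illustrative):

    import json

    import falcon
    from falcon import testing
    import pytest

    from dspace_statistics_api.app import api


    @pytest.fixture
    def client():
        return testing.TestClient(api)


    def test_get_items(client):
        # response.text is a str, so json.loads() works on Python 3.5 and 3.6+
        response = client.simulate_get('/items')

        assert response.status == falcon.HTTP_OK
        assert isinstance(json.loads(response.text)['statistics'], list)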
d94134f80a tests/test_api.py: Try to add workaround for Python 3.5
In Python 3.5 it seems that json.loads() cannot decode a bytes, but
it works in Python 3.6 and 3.7. Let's try a workaround to see if we
can get it working on both Python 3.5 and 3.6+.

See: https://docs.python.org/3.5/library/json.html#json.loads
See: https://docs.python.org/3.6/library/json.html#json.loads
2018-11-11 17:00:20 +02:00
586231eb2d .travis.yml: Use PostgreSQL 9.5
Default PostgreSQL in Travis CI is 9.2 which is very old, so let's
try to use 9.5.

See: https://docs.travis-ci.com/user/database-setup/#postgresql
2018-11-11 16:41:35 +02:00
766b77a3b6 .travis.yml: Use PostgreSQL directly
It seems that Travis CI already has a PostgreSQL service running.
2018-11-11 16:35:28 +02:00
1959e8154e .travis.yml: Use localhost for Docker's PostgreSQL ports
See: https://docs.travis-ci.com/user/docker/
2018-11-11 16:28:18 +02:00
d40b2f0b2e Test API using pytest and PostgreSQL on Travis
First attempt at getting the Travis Docker setup correct. Inspired
by the Travis pipenv setup used in Responder.

See: https://docs.travis-ci.com/user/docker/
See: https://github.com/kennethreitz/responder/blob/master/.travis.yml
2018-11-11 16:25:16 +02:00
061d0a8f5f CHANGELOG.md: Add API tests to unreleased changes 2018-11-11 16:24:54 +02:00
e57660ff88 Add initial pytest configuration
From: https://github.com/kennethreitz/responder/blob/master/pytest.ini
2018-11-11 16:24:54 +02:00
5c8756bede Add pytest to pipenv development packages 2018-11-11 16:24:54 +02:00
bae9fb80e4 Add initial API tests
Test the basic assumptions of the API like response codes and types.
2018-11-11 16:24:54 +02:00
8a65d99e08 .travis.yml: Don't limit builds to master
This is good in theory but it means we can't trigger builds for other
branches on the fly from the Travis web interface.
2018-11-11 16:21:48 +02:00
d479b7dc6c CHANGELOG.md: Syntax fixes 2018-11-11 00:08:44 +02:00
40aac8bf89 Merge pull request #5 from ilri/database-error-handling
Database error handling
2018-11-11 00:07:25 +02:00
53ba6f2936 CHANGELOG.md: Add database try/except to unreleased changes 2018-11-11 00:05:49 +02:00
140cc4cb07 README.md: Remove TODO for database try/except
Now database connection errors are properly excepted and raised.
2018-11-11 00:04:28 +02:00
d5d2d2149b dspace_statistics_api/database.py: Raise HTTP 500 on error
Properly except on database connection error and raise an HTTP 500
instead of spamming the console/log with twenty lines of text.
2018-11-10 23:58:58 +02:00
4c51d12eb4 CHANGELOG.md: Move unreleased changes to version 0.7.0 2018-11-07 17:55:01 +02:00
a6ce44e852 Merge pull request #4 from ilri/database-refactor
Database refactor
2018-11-07 17:54:04 +02:00
f6e866a589 dspace_statistics_api/indexer.py: Remove debug code 2018-11-07 17:51:24 +02:00
eb5c187d41 CHANGELOG.md: Add note about database re-factor 2018-11-07 17:50:46 +02:00
b06c82bb16 README.md: Remove TODO about closing database connection
Now I'm using a database manager class with Python's "with" context
blocks to automatically and concisely open and close connections.
2018-11-07 17:47:59 +02:00
2f342be948 Refactor database code to use a context manager
Instead of opening one global persistent database connection when
the application starts, I am now abstracting it to a class that I can use
in combination with Python's "with" context. Both connections and
cursors are kept for the context of each "with" block and closed
automatically when exiting.

See: https://alysivji.github.io/managing-resources-with-context-managers-pythonic.html
See: http://initd.org/psycopg/docs/connection.html#connection.close
2018-11-07 17:41:21 +02:00
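The usage pattern this enables looks roughly like the following (a sketch based on the `DatabaseManager` class included later in this diff):

    from dspace_statistics_api.database import DatabaseManager

    # Both the connection and the cursor live only for the duration of their
    # "with" blocks and are closed automatically on exit.
    with DatabaseManager() as db:
        with db.cursor() as cursor:
            cursor.execute('SELECT COUNT(id) FROM items')
            print(cursor.fetchone()[0])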
e39f2b260c Merge pull request #3 from ilri/ipython
Add ipython to pipenv dev packages
2018-11-07 17:09:09 +02:00
60ad474b88 Add ipython to pipenv dev packages
This is very useful for debugging Python code interactively.
2018-11-07 17:07:14 +02:00
888f85d19e README.md: Adjust installation for pipenv
It's nicer to manage module versions using pipenv, and I can still
generate a requirements.txt for deploying the exact versions on the
production server.
2018-11-04 16:07:27 +02:00
df7de93964 CHANGELOG.md: Add pipenv to unreleased changes 2018-11-04 15:59:11 +02:00
7218631cc4 requirements.txt: Regenerate
Created from pipenv with the following command:

    $ pipenv lock -r > requirements.txt
2018-11-04 15:58:31 +02:00
085e525b2f Regenerate pipenv
Uses 'kazoo-2.5.0' branch name for installing SolrClient instead of
the commit hash and adds flake8 as a dev package. This means that I
can track dependencies for production and development and still end
up with a requirements.txt for production.
2018-11-04 15:55:28 +02:00
e1580df12f requirements.txt: Update packages
Generated from pipenv with:

    $ pipenv run pip freeze > requirements.txt
2018-11-04 15:39:09 +02:00
be18779ff9 Add flake8 to pipenv 2018-11-04 15:38:51 +02:00
60cfd8f23b Add Pipfile for pipenv
Eventually I'd like to be able to use pipenv instead of plain pip.
For now I'll just keep using pipenv and generating requirements.txt
like this:

    $ pipenv run pip freeze > requirements.txt

Then I can kinda have the best of both worlds, where I use pipenv
on my local machine and pip with requirements.txt on the server.
2018-11-04 15:35:47 +02:00
87fd117d77 .hound.yml: Set pull requests to failed if build fails 2018-11-04 00:53:37 +02:00
f262ebdca2 CHANGELOG.md: Add unreleased changes 2018-11-04 00:50:46 +02:00
64d7f1a3b2 Enable Flake8 validation in Hound CI
Will check all pull requests in the project to make sure they don't
violate PEP 8 style (except the E501 for long lines because I think
it makes code hard to read).
2018-11-04 00:48:06 +02:00
a238a727d2 CHANGELOG.md: Add unreleased changes 2018-11-04 00:05:16 +02:00
cc5ce3ab98 Correct issues highlighted by Flake8
Flake8 validates code style against PEP 8 in order to encourage the
writing of idiomatic Python. For reference, I am currently ignoring
errors about line length (E501) because I feel it makes code harder
to read.

This is the invocation I am using:

    $ flake8 --ignore E501 dspace_statistics_api
2018-11-04 00:04:27 +02:00
70dfcb93c5 dspace_statistics_api/database.py: Don't quote host in connect() 2018-11-03 22:43:05 +02:00
69bcd1b5e4 CHANGELOG.md: Add unreleased changes 2018-11-03 22:42:08 +02:00
5f3bd61998 Allow configuration of PostgreSQL port
Defaults to port 5432, but can be overridden with DATABASE_PORT.
2018-11-03 22:40:45 +02:00
e54dd8888f README.md: Update requirements 2018-11-01 16:31:36 +02:00
2ba09f8693 README.md: Improve introduction
Obsessed with the text presentation and line length in GitHub!
2018-11-01 16:28:01 +02:00
a468a87a5a README.md: Improve introduction 2018-11-01 15:58:12 +02:00
6a30b6550d README.md: Improve introduction 2018-11-01 15:45:53 +02:00
18f013bfa0 README.md: Add Falcon to introduction 2018-11-01 10:24:37 +02:00
78900b5d85 CHANGELOG.md: Add changes for v0.6.1 2018-11-01 00:39:12 +02:00
eb08832bf8 Sync API documentation HTML with README.md 2018-11-01 00:37:52 +02:00
c2ec780ad9 README.md: Improve API documentation 2018-11-01 00:37:40 +02:00
df8ebc8bf1 README.md: Improve API endpoint documentation 2018-11-01 00:31:16 +02:00
0d4be5f4c8 README.md: Add API documentation endpoint 2018-11-01 00:22:16 +02:00
30dc7f1939 Add basic API documentation on root (/)
I had imagined plugging in an interactive Swagger or OpenAPI instance
here, but that's actually much more involved in Falcon than I want to
deal with right now.
2018-11-01 00:19:39 +02:00
77194707fd README.md: Improve introduction 2018-11-01 00:08:24 +02:00
10c1f8bdcc README.md: Update Travis CI badge 2018-10-31 23:14:38 +02:00
da74943da2 README.md: Update introduction 2018-10-31 22:40:36 +02:00
fc8348ab29 README.md: Add acknowledgement about the Solr queries 2018-10-31 19:36:50 +02:00
15c3299b99 CHANGELOG.md: Add changes for v0.6.0 2018-10-31 19:26:45 +02:00
d36be5ee50 contrib: Update systemd unit files for refactor 2018-10-28 11:14:21 +02:00
2f45d27554 dspace_statistics_api/app.py: remove unused code
This was added accidentally when I refactored. I was trying to see
if I could use Falcon's on_exit() hook.
2018-10-28 11:14:21 +02:00
b8356f7a87 Add "application" alias to API object
By default gunicorn looks for an "application" object to run, so this
saves us having to type api:app.
2018-10-28 11:14:21 +02:00
2136dc79ce Remove shebang from indexer.py
This is run as a Python module now so does not need a shebang.
2018-10-28 11:14:21 +02:00
ed60120cef Remove executable bit from indexer.py
Now it is run as a Python module.
2018-10-28 11:14:21 +02:00
c027f01b48 Refactor project structure
This follows guidance from several well-known Python best practices
guides. Basically, the idea is to create a package for the application
that is comprised of several re-usable modules.

See: https://docs.python-guide.org/writing/structure/
See: https://realpython.com/python-application-layouts/
2018-10-28 11:14:21 +02:00
754663f062 CHANGELOG.md: Add changes for version 0.5.2 2018-10-28 11:12:27 +02:00
507699e58a requirements.txt: Update libraries
Switch to a personal fork of SolrClient so that we can use kazoo 2.5.0
and get rid of the error about the 'async' keyword on Python 3.7. Also
this bumps some of the other libraries to their latest versions.
2018-10-28 11:09:47 +02:00
a016916995 CHANGELOG.md: Add note about ujson 2018-10-24 14:15:03 +03:00
6fd2827a7c Use Python's native json instead of ujson
Falcon can optionally use ujson to speed up JSON (de)serialization,
but Falcon's already really fast and requiring ujson actually makes
deployment trickier in some cases (for example in Docker containers
that are based on Alpine Linux).

Here are some tests of Falcon 1.4.1 on Python 3.5 from my laptop:

    1. falcon...............60172 req/sec or 16.62 μs/req (36x)
    2. falcon-ext...........34186 req/sec or 29.25 μs/req (20x)
    3. bottle...............32924 req/sec or 30.37 μs/req (20x)
    4. werkzeug.............11948 req/sec or 83.70 μs/req (7x)
    5. flask.................6654 req/sec or 150.30 μs/req (4x)
    6. django................4565 req/sec or 219.04 μs/req (3x)
    7. pecan.................1672 req/sec or 598.19 μs/req (1x)

The tests were conducted with Falcon's official Docker benchmarking
tools on my Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz on Arch Linux.

See: https://github.com/falconry/falcon/tree/master/docker
2018-10-24 14:08:23 +03:00
62142eb79e CHANGELOG.md: Move unreleased changes to v0.5.0 2018-10-24 12:02:42 +03:00
fda0321942 CHANGELOG.md: Add note about Solr in API component 2018-10-24 12:01:47 +03:00
963aa245c8 app.py: Don't initialize Solr connection
We only need Solr in the indexing component, not for the API itself.
2018-10-24 11:59:50 +03:00
568ff2eebb CHANGELOG.md: Add note about nginx configuration 2018-10-23 14:56:44 +03:00
deecb8a10b README.md: Add example nginx configuration 2018-10-23 14:55:36 +03:00
12f45d7c08 contrib: Adjust example path 2018-10-23 14:34:29 +03:00
f65089f9ce CHANGELOG.md: Update and move to 0.4.3 release 2018-10-17 09:51:44 +03:00
1db5cf1c29 README.md: Grammar 2018-10-17 09:51:35 +03:00
e581c4b1aa README.md: Improve documentation 2018-10-17 09:50:30 +03:00
e8d356c9ca README.md: Add TODO about Python 3.6+ f-string syntax
They are faster.
2018-10-17 09:13:25 +03:00
34a9b8d629 CHANGELOG.md: Add unreleased changes for Travis CI 2018-10-14 19:02:09 +03:00
41e3d66a0e .travis.yml: Only build master branch 2018-10-14 19:00:31 +03:00
9b2a6137b4 README.md: Add Travis CI badge
For now this is only an indicator that the Python requirements can
be satisfied and installed.
2018-10-14 18:58:12 +03:00
600b986f99 .travis.yml: Use Python 3.7-dev instead of 3.7
I don't think Travis supports Python 3.7 yet because the builds for
that version keep failing.
2018-10-14 18:57:30 +03:00
49a7790794 .travis.yml: Move script to one line 2018-10-14 18:53:45 +03:00
f2deba627c .travis.yml: Run pip install as script
Basically for now there are no tests, so I just want to check
that requirements.txt is correct and that all dependencies can be
installed.
2018-10-14 18:47:14 +03:00
9323513794 README.md: Update instructions 2018-10-14 18:45:40 +03:00
daf15610f2 CHANGELOG.md: Update changes and move to 0.4.2 2018-10-05 00:19:18 +03:00
4ede966dbb indexer.py: Fix logic error in SQL insert
This was inserting correctly on the first run, but subsequent runs
were inserting into the incorrect column on conflict. This made it
seem like there were downloads for items where there were none.
2018-10-05 00:16:24 +03:00
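The gist of the fix, sketched rather than quoted from the diff, is that each statistic's ON CONFLICT clause must update its own column so views and downloads no longer overwrite each other:

    # Sketch of the corrected UPSERT statements; table and column names match
    # the schema used elsewhere in this project.
    sql_views = ('INSERT INTO items(id, views) VALUES %s '
                 'ON CONFLICT(id) DO UPDATE SET views=excluded.views')
    sql_downloads = ('INSERT INTO items(id, downloads) VALUES %s '
                     'ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads')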
3580473a6d README.md: Add TODO about JSON in PostgreSQL 2018-10-03 20:08:18 +03:00
071c24535f README.md: Add TODO about API versions 2018-10-03 11:12:18 +03:00
4291aecac4 README.md: Formatting 2018-09-27 12:45:15 +03:00
46bf537e88 CHANGELOG.md: Add note about cursor change 2018-09-27 11:08:42 +03:00
eaca5354d3 app.py: Iterate directly on cursor
We don't need to create an intermediate variable for the results of
the SQL query because psycopg2's cursor is iterable.

See: http://initd.org/psycopg/docs/cursor.html
2018-09-27 11:03:44 +03:00
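In other words, a minimal sketch of the pattern (assuming the project's DictCursor so rows are addressable by column name):

    # psycopg2 cursors are iterable, so no intermediate results variable is needed
    cursor.execute('SELECT id, views, downloads FROM items ORDER BY id ASC')
    for item in cursor:
        print(item['id'], item['views'], item['downloads'])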
4600288ee4 CHANGELOG.md: Add note about ujson 2018-09-27 09:53:42 +03:00
8179563378 requirements.txt: pip freeze 2018-09-27 09:53:16 +03:00
b14c3eef4d indexer.py: Use ujson instead of json
Falcon optionally makes use of the ujson library to speed up media
(de)serialization, error serialization, and query string parsing.

See: https://falcon.readthedocs.io/en/stable/user/install.html
2018-09-27 09:51:40 +03:00
71a789b13f CHANGELOG.md: Add unreleased changes 2018-09-27 09:30:48 +03:00
c68ddacaa4 README.md: Add note about systemd units for deployment 2018-09-27 09:26:47 +03:00
9c9e79769e README.md: Add TODO 2018-09-27 09:17:45 +03:00
2ad5ade556 README.md: Improve introduction 2018-09-27 09:12:52 +03:00
7412a09670 README.md: Improve introduction 2018-09-27 09:07:28 +03:00
bb744a00b8 README.md: Add requirements 2018-09-27 08:57:27 +03:00
7499b89d99 CHANGELOG.md: Move unreleased changes to v0.4.1 2018-09-27 08:15:54 +03:00
2c1e4952b1 indexer.py: Remove comment
I had left this there so I could remember how to get the number of
facets, but I don't need it anymore.
2018-09-26 23:27:48 +03:00
379f202c3f CHANGELOG.md: Add unreleased changes 2018-09-26 23:26:48 +03:00
560fa6056d README.md: Remove batch inserts from TODO 2018-09-26 23:25:35 +03:00
385a34e5d0 indexer.py: Use psycopg2's execute_values to batch inserts
Batch inserts are much faster than a series of individual inserts
because they drastically reduce the overhead caused by round-trip
communication with the server. My tests in development confirm:

  - cursor.execute(): 19 seconds
  - execute_values(): 14 seconds

I'm currently only working with 4,500 rows, but I will experiment
with larger data sets, as well as larger batches. For example, on
the PostgreSQL mailing list a user reports doing 10,000 rows with
a page size of 100.

See: http://initd.org/psycopg/docs/extras.html#psycopg2.extras.execute_values
See: https://github.com/psycopg/psycopg2/issues/491#issuecomment-276551038
2018-09-26 23:10:29 +03:00
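A minimal sketch of the batched insert, assuming `data` is a list of `(id, views)` tuples accumulated from one page of Solr facet results:

    import psycopg2.extras

    sql = 'INSERT INTO items(id, views) VALUES %s ON CONFLICT(id) DO UPDATE SET views=excluded.views'

    # execute_values() expands the whole list into a single INSERT statement,
    # sending batches of page_size rows per round trip.
    psycopg2.extras.execute_values(cursor, sql, data, page_size=100)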
d0ea62d2bd database.py: Use one line for psycopg2 imports 2018-09-26 22:23:24 +03:00
366ae25b8e README.md: Add link to psycopg2 issue about batch inserts 2018-09-26 22:23:08 +03:00
0f3054ae03 README.md: Add TODO about batch DB inserts 2018-09-26 16:31:13 +03:00
6bf34235d4 CHANGELOG.md: Move unreleased changes to version 0.4.0 2018-09-26 02:51:27 +03:00
e604d8ca81 indexer.py: Major refactor
Basically Solr's numFound has nothing to do with the actual number
of distinct facets that are returned. You need to use Solr's stats
component to get the number of distinct facets, aka countDistinct.
This is apparently deprecated in newer Solr versions, but we're on
version 4.10 and it works there.

Also, I realized that there is no need to return facets for items
without any views or downloads. Using facet.mincount=1 reduces the
result set size and also means we can store less data in the
database. The API returns HTTP 404 Not Found if an item is not in the
database anyways.

I can't figure it out exactly, but there is some weird issue with
Solr's facet results when you don't use facet.mincount=1. For some
reason you get tons of results with an id that doesn't even exist
in the document database, let alone as an actual DSpace item!

See: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
2018-09-26 02:41:10 +03:00
fc35b816f3 CHANGELOG.md: Add unreleased changes 2018-09-25 23:09:44 +03:00
9e6a2f7559 contrib/dspace-statistics-indexer.timer: Fix syntax
You can test OnCalendar strings using systemd-analyze calendar, eg:

    # systemd-analyze calendar '*-*-* 06:00:00,18:00:00'
    Failed to parse calendar specification '*-*-* 06:00:00,18:00:00':
    Invalid argument
    # systemd-analyze calendar '*-*-* 06,18:00:00'
    Normalized form: *-*-* 06,18:00:00
        Next elapse: Wed 2018-09-26 06:00:00 EEST
           (in UTC): Wed 2018-09-26 03:00:00 UTC
           From now: 6h left
2018-09-25 23:07:03 +03:00
46cfc3ffbc CHANGELOG.md: Release version 0.3.2 2018-09-25 13:14:08 +03:00
2850035a4c Return HTTP 404 when an item id is not found 2018-09-25 13:12:53 +03:00
c0b550109a README.md: Improve wording 2018-09-25 12:24:52 +03:00
bfceffd84d indexer.py: Improve inline documentation 2018-09-25 12:23:31 +03:00
d0552f5047 CHANGELOG.md: Move unreleased changes to version 0.3.1 2018-09-25 12:18:26 +03:00
c3a0bf7f44 CHANGELOG.md: Add Python 3.7 to Travis CI config 2018-09-25 12:17:49 +03:00
6e47e9c9ee .travis.yml: Add Python 3.7 2018-09-25 12:17:20 +03:00
cd90d618d6 CHANGELOG.md: Fix error in old release 2018-09-25 12:17:01 +03:00
280d211d56 CHANGELOG.md: Add note about kazoo 2.5.0 2018-09-25 12:12:10 +03:00
806d63137f requirements.txt: Use kazoo 2.5.0
SolrClient 0.2.1 currently depends on kazoo 2.2.1, but there is an
issue with Python 3.7 in kazoo < 2.5.0. Kazoo 2.5.0 fixes the issue
with Python 3.7, and for my limited usage of SolrClient it seems
to work fine.

See: https://github.com/moonlitesolutions/SolrClient/issues/79
2018-09-25 12:08:28 +03:00
f7c7390e4f README.md: Add note about Python 3.7 2018-09-25 12:07:58 +03:00
702724e8a4 CHANGELOG.md: Move unreleased changes to version 0.3.0 2018-09-25 11:38:36 +03:00
36818d03ef CHANGELOG.md: Update unreleased changes 2018-09-25 11:37:56 +03:00
4cf8656b35 Change / route to /items
I think it's more obvious if the "all items" route is plural. Also,
this will allow me to eventually put documentation at the root.
2018-09-25 11:34:07 +03:00
f30a464cd1 README.md: Add notes about API endpoints 2018-09-25 11:28:12 +03:00
93ae12e313 README.md: Update introduction 2018-09-25 11:15:12 +03:00
dc978e9333 CHANGELOG.md: Add note about requirements.txt and Travis CI 2018-09-25 11:09:02 +03:00
295436fea0 Add .travis.yml 2018-09-25 11:08:01 +03:00
46a1476ab0 Add requirements.txt
Generated with `pip freeze`. This is so I can pin the versions of
packages that I've tested with as well as to allow Travis to test
whether the project runs on various Pythons and to let GitHub
inform me of vulnerabilities in some libraries.
2018-09-25 11:02:50 +03:00
87dbb6c4df CHANGELOG.md: Release version 0.2.1 2018-09-25 02:21:44 +03:00
3160c44566 app.py: Remove comment
This comment was added when I first began the application and the
testing status is documented in the README now.
2018-09-25 02:20:51 +03:00
4b72f626d9 Update string substitution format
Instead of doing numbered strings I will just depend on the order,
at least to be consistent.
2018-09-25 02:19:29 +03:00
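That is, switching from numbered to positional placeholders (a trivial sketch with a hypothetical id):

    item_id = 42  # hypothetical id

    # before: numbered placeholder
    'SELECT views FROM itemviews WHERE id={0}'.format(item_id)

    # after: rely on argument order
    'SELECT views FROM itemviews WHERE id={}'.format(item_id)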
2d3b7620e3 CHANGELOG.md: Add note about psycopg2.extras.DictCursor 2018-09-25 02:08:54 +03:00
6e4bc630f7 database.py: Use psycopg2.extras.DictCursor
This allows us to access records using their column name. I didn't
notice that this was not working, as I had been testing the wrong
server!

See: http://initd.org/psycopg/docs/extras.html
2018-09-25 02:06:29 +03:00
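A short sketch of what DictCursor enables, namely accessing columns by name instead of position (the connection parameters shown are the project defaults, not a real deployment):

    import psycopg2
    import psycopg2.extras

    connection = psycopg2.connect('dbname=dspacestatistics user=dspacestatistics host=localhost',
                                  cursor_factory=psycopg2.extras.DictCursor)

    cursor = connection.cursor()
    cursor.execute('SELECT views, downloads FROM items WHERE id=%s', (42,))
    row = cursor.fetchone()

    # rows behave like dicts keyed by column name
    print(row['views'], row['downloads'])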
44884140e5 CHANGELOG.md: Add new unreleased changes 2018-09-25 01:11:37 +03:00
74ff86ee3b contrib: Update environment settings in system units 2018-09-25 01:10:14 +03:00
3327884f21 Update docs to remove SQLite stuff
I've decided to use PostgreSQL instead of SQLite because the UPSERT
support is available in versions of PostgreSQL we're already running,
whereas SQLite needs a VERY new (3.24.0) version that is not
available on any recent long-term support Ubuntu releases.
2018-09-25 00:56:01 +03:00
8f7450f67a Use PostgreSQL instead of SQLite
I was very surprised how easy and fast and robust SQLite was, but in
the end I realized that its UPSERT support only came in version 3.24
and both Ubuntu 16.04 and 18.04 have older versions than that! I did
manage to install libsqlite3-0 from Ubuntu 18.10 cosmic on my xenial
host, but that feels dirty.

PostgreSQL has support for UPSERT since 9.5, not to mention the same
nice LIMIT and OFFSET clauses.
2018-09-25 00:49:47 +03:00
28d61fb041 README.md: Add notes about Python and SQLite versions 2018-09-24 17:26:48 +03:00
cbc98991b4 CHANGELOG.md: Move unreleased notes to version 0.1.0 2018-09-24 16:14:14 +03:00
6c28be0463 README.md: Add note about route for all items 2018-09-24 16:13:26 +03:00
42e8f17305 CHANGELOG.md: Add note about route for all items 2018-09-24 16:13:05 +03:00
19a45f3f6f app.py: Add route to page through all item statistics
This route exposes all item statistics and uses the limit and offset
parameters to control paging through the result set. The logic here
is extremely easy thanks to the brilliant LIMIT and OFFSET features
of SQLite (of course the SQL query sorts the results by some unique
field to ensure the order is always the same).
2018-09-24 16:07:26 +03:00
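The paging logic amounts to something like this sketch (the route computes the offset from the requested page; the column names follow the later single-table schema):

    limit = 100   # results per page
    page = 0      # requested page
    offset = limit * page

    # sort by a unique column so pages are stable, then slice with LIMIT/OFFSET
    cursor.execute('SELECT id, views, downloads FROM items ORDER BY id ASC LIMIT {} OFFSET {}'.format(limit, offset))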
505ef31101 CHANGELOG.md: Add note about UPSERT 2018-09-24 14:31:05 +03:00
1543cacc54 app.py: Update SQL logic to use single table
The indexer.py script was updated to use a single table because I
learned about UPSERT. This simplifies the database schema and the
Python logic, and makes it easier to page all views and downloads
at once without complicated JOIN queries.
2018-09-24 14:28:00 +03:00
2cab456f16 indexer.py: Use single items table with UPSERT
I was using two separate tables for item views and downloads without
realizing that SQLite didn't support FULL OUTER JOIN, which would be
needed to get views and downloads for a given item in a single query.

Instead I can use one table with a default value of 0 for both views
and downloads, and then use "UPSERT" to populate the statistics. This
is a newish SQL concept that allows you to attempt an INSERT and then
specify an action to perform in case of conflict. This works well in
SQLite and actually simplifies my Python logic greatly!

Note that the "excluded" table qualifier is a special keyword that
allows you to reference the value that would have been inserted.

See: https://www.sqlite.org/lang_UPSERT.html
2018-09-24 14:19:50 +03:00
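A small self-contained sketch of the SQLite flavour of this UPSERT (requires SQLite 3.24.0 or newer, as noted in the "Use PostgreSQL instead of SQLite" entry above):

    import sqlite3

    # in-memory database purely for illustration
    connection = sqlite3.connect(':memory:')
    cursor = connection.cursor()
    cursor.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, views INTEGER DEFAULT 0, downloads INTEGER DEFAULT 0)')

    # the first execute inserts the row; the second hits the conflict and updates
    # it, with "excluded" referring to the row that would have been inserted
    for views in (1, 5):
        cursor.execute('INSERT INTO items(id, views) VALUES(?, ?) '
                       'ON CONFLICT(id) DO UPDATE SET views=excluded.views', (42, views))

    print(cursor.execute('SELECT views FROM items WHERE id=42').fetchone()[0])  # 5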
53615dea2d indexer.py: Add license and documentation 2018-09-24 09:18:50 +03:00
2d8d1e6833 README.md: Add TODO for nonexistent items 2018-09-24 00:48:02 +03:00
e26e595ea1 README.md: Add more TODOs 2018-09-24 00:35:00 +03:00
a9151b5bbf CHANGELOG.md: Update unreleased notes 2018-09-24 00:30:58 +03:00
76833d6f5f contrib: Update some old CGSpace references to DSpace 2018-09-24 00:30:26 +03:00
a51422273c Remove SOLR_CORE configuration variable
This parameter is not customizable. All DSpace instances use this
name for the Solr statistics core.
2018-09-24 00:20:54 +03:00
89621af85d Split database access into RW and RO
The indexer needs to be able to write to the database, but the API only
needs to read it.
2018-09-24 00:00:05 +03:00
c554404d7f CHANGELOG.md: Add systemd units for indexer 2018-09-23 23:15:27 +03:00
90d7a452bd contrib: Add systemd units for indexer
An example systemd service unit for the indexer and an accompanying
timer unit.
2018-09-23 23:13:43 +03:00
431a1c9d64 CHANGELOG.md: Add unreleased changes 2018-09-23 23:04:01 +03:00
e1b9d1284f Rename project to DSpace Statistics API
At first I called it "CGSpace" because I was making it specifically
for our CGSpace DSpace repository, but the potential here is bigger
than that!
2018-09-23 23:02:21 +03:00
29 changed files with 78274 additions and 201 deletions

2
.flake8

@ -0,0 +1,2 @@
[flake8]
ignore = E501

1
.gitignore

@ -1,3 +1,2 @@
__pycache__
venv
*.db

4
.hound.yml

@ -0,0 +1,4 @@
flake8:
  enabled: true
  config_file: .flake8
fail_on_violations: true

19
.travis.yml

@ -0,0 +1,19 @@
language: python
python:
  - "3.5"
  - "3.6"
  - "3.7-dev"
addons:
  postgresql: "9.5"
before_script:
  - psql --version
  - createuser -U postgres dspacestatistics
  - psql -U postgres -c "ALTER USER dspacestatistics WITH PASSWORD 'dspacestatistics'"
  - createdb -U postgres -O dspacestatistics --encoding=UNICODE dspacestatistics
  - psql -U postgres -d dspacestatistics < tests/dspacestatistics.sql
install:
  - "pip install pipenv --upgrade-strategy=only-if-needed"
  - "pipenv install --dev"
script: pytest

# vim: ts=2 sw=2 et

View File

@ -4,6 +4,128 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.9.0] - 2019-01-22
### Updated
- pytest version 4.0.0
- Fix indexing of sharded statistics cores (#10)
- Handle case of missing views/downloads gracefully
## [0.8.1] - 2018-11-14
### Changed
- README.md to recommend using vanilla Python virtual environments and pip instead of pipenv
- Regenerate pipenv environment to capture only direct dependencies
### Added
- `requirements-dev.txt` for installing development packages with pip
## [0.8.0] - 2018-11-11
### Changed
- Properly handle database connection errors
### Added
- API tests with pytest
## [0.7.0] - 2018-11-07
### Added
- Ability to configure PostgreSQL database port with DATABASE_PORT environment variable (defaults to 5432)
- Hound CI configuration to validate pull requests against PEP 8 code style with Flake8
- Configuration for [pipenv](https://pipenv.readthedocs.io/en/latest/)
### Changed
- Use a database management class with Python context management to automatically open/close connections and cursors
### Changed
- Validate code against PEP 8 style guide with Flake8
## [0.6.1] - 2018-10-31
### Added
- API documentation at root path (/)
## [0.6.0] - 2018-10-31
### Changed
- Refactor project structure (note breaking changes to API and indexing invocation, see contrib and README.md)
## [0.5.2] - 2018-10-28
### Changed
- Update library versions in requirements.txt
## [0.5.1] - 2018-10-24
### Changed
- Use Python's native json instead of ujson
## [0.5.0] - 2018-10-24
### Added
- Example nginx configuration to README.md
### Changed
- Don't initialize Solr connection in API
## [0.4.3] - 2018-10-17
### Changed
- Use pip install as script for Travis CI
### Improved
- Documentation for deployment and testing
## [0.4.2] - 2018-10-04
### Changed
- README.md introduction and requirements
- Use ujson instead of json
- Iterate directly on SQL cursor in `/items` route
### Fixed
- Logic error in SQL for item views
## [0.4.1] - 2018-09-26
### Changed
- Use `execute_values()` to batch insert records to PostgreSQL
## [0.4.0] - 2018-09-25
### Fixed
- Invalid OnCalendar syntax in dspace-statistics-indexer.timer
- Major logic error in indexer.py
## [0.3.2] - 2018-09-25
### Changed
- /item/id route now returns HTTP 404 if an item is not found
## [0.3.1] - 2018-09-25
### Changed
- Force SolrClient's kazoo dependency to version 2.5.0 to work with Python 3.7
- Add Python 3.7 to Travis CI configuration
## [0.3.0] - 2018-09-25
### Added
- requirements.txt for pip
- Travis CI build configuration for Python 3.5 and 3.6
- Documentation on using the API
### Changed
- The "all items" route from / to /items
## [0.2.1] - 2018-09-24
### Changed
- Environment settings in example systemd unit files
- Use psycopg2.extras.DictCursor for PostgreSQL connection
## [0.2.0] - 2018-09-24
### Changed
- Use PostgreSQL instead of SQLite because UPSERT support needs a very new libsqlite3 whereas it's already in PostgreSQL 9.5+
## [0.1.0] - 2018-09-24
### Changed
- Rename project to "DSpace Statistics API"
- Use read-only database connection in API
- Update systemd units for CGSpace→DSpace rename
- Use UPSERT to simplify database schema and Python logic
### Added
- Example systemd service and timer unit for indexer service
- Add top-level route to expose all item statistics
### Removed
- Ability to customize SOLR_CORE variable
## [0.0.4] - 2018-09-23
### Added
- Added example systemd unit file for API

18
Pipfile

@ -0,0 +1,18 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
gunicorn = "*"
falcon = "*"
"psycopg2-binary" = "*"
solrclient = {ref = "kazoo-2.5.0", git = "https://github.com/alanorth/SolrClient.git"}
[dev-packages]
ipython = "*"
"flake8" = "*"
pytest = "*"
[requires]
python_version = "3.7"

266
Pipfile.lock

@ -0,0 +1,266 @@
{
"_meta": {
"hash": {
"sha256": "a846fdab4de5765a7e7fc19424a97a6196248e29f87285cf81fd76e8e9ae3e28"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.7"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
}
]
},
"default": {
"falcon": {
"hashes": [
"sha256:0a66b33458fab9c1e400a9be1a68056abda178eb02a8cb4b8f795e9df20b053b",
"sha256:3981f609c0358a9fcdb25b0e7fab3d9e23019356fb429c635ce4133135ae1bc4"
],
"index": "pypi",
"version": "==1.4.1"
},
"gunicorn": {
"hashes": [
"sha256:aa8e0b40b4157b36a5df5e599f45c9c76d6af43845ba3b3b0efe2c70473c2471",
"sha256:fa2662097c66f920f53f70621c6c58ca4a3c4d3434205e608e121b5b3b71f4f3"
],
"index": "pypi",
"version": "==19.9.0"
},
"psycopg2-binary": {
"hashes": [
"sha256:036bcb198a7cc4ce0fe43344f8c2c9a8155aefa411633f426c8c6ed58a6c0426",
"sha256:1d770fcc02cdf628aebac7404d56b28a7e9ebec8cfc0e63260bd54d6edfa16d4",
"sha256:1fdc6f369dcf229de6c873522d54336af598b9470ccd5300e2f58ee506f5ca13",
"sha256:21f9ddc0ff6e07f7d7b6b484eb9da2c03bc9931dd13e36796b111d631f7135a3",
"sha256:247873cda726f7956f745a3e03158b00de79c4abea8776dc2f611d5ba368d72d",
"sha256:3aa31c42f29f1da6f4fd41433ad15052d5ff045f2214002e027a321f79d64e2c",
"sha256:475f694f87dbc619010b26de7d0fc575a4accf503f2200885cc21f526bffe2ad",
"sha256:4b5e332a24bf6e2fda1f51ca2a57ae1083352293a08eeea1fa1112dc7dd542d1",
"sha256:570d521660574aca40be7b4d532dfb6f156aad7b16b5ed62d1534f64f1ef72d8",
"sha256:59072de7def0690dd13112d2bdb453e20570a97297070f876fbbb7cbc1c26b05",
"sha256:5f0b658989e918ef187f8a08db0420528126f2c7da182a7b9f8bf7f85144d4e4",
"sha256:649199c84a966917d86cdc2046e03d536763576c0b2a756059ae0b3a9656bc20",
"sha256:6645fc9b4705ae8fbf1ef7674f416f89ae1559deec810f6dd15197dfa52893da",
"sha256:6872dd54d4e398d781efe8fe2e2d7eafe4450d61b5c4898aced7610109a6df75",
"sha256:6ce34fbc251fc0d691c8d131250ba6f42fd2b28ef28558d528ba8c558cb28804",
"sha256:73920d167a0a4d1006f5f3b9a3efce6f0e5e883a99599d38206d43f27697df00",
"sha256:8a671732b87ae423e34b51139628123bc0306c2cb85c226e71b28d3d57d7e42a",
"sha256:8d517e8fda2efebca27c2018e14c90ed7dc3f04d7098b3da2912e62a1a5585fe",
"sha256:9475a008eb7279e20d400c76471843c321b46acacc7ee3de0b47233a1e3fa2cf",
"sha256:96947b8cd7b3148fb0e6549fcb31258a736595d6f2a599f8cd450e9a80a14781",
"sha256:abf229f24daa93f67ac53e2e17c8798a71a01711eb9fcdd029abba8637164338",
"sha256:b1ab012f276df584beb74f81acb63905762c25803ece647016613c3d6ad4e432",
"sha256:b22b33f6f0071fe57cb4e9158f353c88d41e739a3ec0d76f7b704539e7076427",
"sha256:b3b2d53274858e50ad2ffdd6d97ce1d014e1e530f82ec8b307edd5d4c921badf",
"sha256:bab26a729befc7b9fab9ded1bba9c51b785188b79f8a2796ba03e7e734269e2e",
"sha256:daa1a593629aa49f506eddc9d23dc7f89b35693b90e1fbcd4480182d1203ea90",
"sha256:dd111280ce40e89fd17b19c1269fd1b74a30fce9d44a550840e86edb33924eb8",
"sha256:e0b86084f1e2e78c451994410de756deba206884d6bed68d5a3d7f39ff5fea1d",
"sha256:eb86520753560a7e89639500e2a254bb6f683342af598088cb72c73edcad21e6",
"sha256:ff18c5c40a38d41811c23e2480615425c97ea81fd7e9118b8b899c512d97c737"
],
"index": "pypi",
"version": "==2.7.6.1"
},
"python-mimeparse": {
"hashes": [
"sha256:76e4b03d700a641fd7761d3cd4fdbbdcd787eade1ebfac43f877016328334f78",
"sha256:a295f03ff20341491bfe4717a39cd0a8cc9afad619ba44b77e86b0ab8a2b8282"
],
"version": "==1.6.0"
},
"six": {
"hashes": [
"sha256:3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c",
"sha256:d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73"
],
"version": "==1.12.0"
},
"solrclient": {
"git": "https://github.com/alanorth/SolrClient.git",
"ref": "c629e3475be37c82770b2be61748be7e29882648"
}
},
"develop": {
"atomicwrites": {
"hashes": [
"sha256:0312ad34fcad8fac3704d441f7b317e50af620823353ec657a53e981f92920c0",
"sha256:ec9ae8adaae229e4f8446952d204a3e4b5fdd2d099f9be3aaf556120135fb3ee"
],
"version": "==1.2.1"
},
"attrs": {
"hashes": [
"sha256:10cbf6e27dbce8c30807caf056c8eb50917e0eaafe86347671b57254006c3e69",
"sha256:ca4be454458f9dec299268d472aaa5a11f67a4ff70093396e1ceae9c76cf4bbb"
],
"version": "==18.2.0"
},
"backcall": {
"hashes": [
"sha256:38ecd85be2c1e78f77fd91700c76e14667dc21e2713b63876c0eb901196e01e4",
"sha256:bbbf4b1e5cd2bdb08f915895b51081c041bac22394fdfcfdfbe9f14b77c08bf2"
],
"version": "==0.1.0"
},
"decorator": {
"hashes": [
"sha256:2c51dff8ef3c447388fe5e4453d24a2bf128d3a4c32af3fabef1f01c6851ab82",
"sha256:c39efa13fbdeb4506c476c9b3babf6a718da943dab7811c206005a4a956c080c"
],
"version": "==4.3.0"
},
"flake8": {
"hashes": [
"sha256:6a35f5b8761f45c5513e3405f110a86bea57982c3b75b766ce7b65217abe1670",
"sha256:c01f8a3963b3571a8e6bd7a4063359aff90749e160778e03817cd9b71c9e07d2"
],
"index": "pypi",
"version": "==3.6.0"
},
"ipython": {
"hashes": [
"sha256:6a9496209b76463f1dec126ab928919aaf1f55b38beb9219af3fe202f6bbdd12",
"sha256:f69932b1e806b38a7818d9a1e918e5821b685715040b48e59c657b3c7961b742"
],
"index": "pypi",
"version": "==7.2.0"
},
"ipython-genutils": {
"hashes": [
"sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8",
"sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"
],
"version": "==0.2.0"
},
"jedi": {
"hashes": [
"sha256:571702b5bd167911fe9036e5039ba67f820d6502832285cde8c881ab2b2149fd",
"sha256:c8481b5e59d34a5c7c42e98f6625e633f6ef59353abea6437472c7ec2093f191"
],
"version": "==0.13.2"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"version": "==0.6.1"
},
"more-itertools": {
"hashes": [
"sha256:38a936c0a6d98a38bcc2d03fdaaedaba9f412879461dd2ceff8d37564d6522e4",
"sha256:c0a5785b1109a6bd7fac76d6837fd1feca158e54e521ccd2ae8bfe393cc9d4fc",
"sha256:fe7a7cae1ccb57d33952113ff4fa1bc5f879963600ed74918f1236e212ee50b9"
],
"version": "==5.0.0"
},
"parso": {
"hashes": [
"sha256:35704a43a3c113cce4de228ddb39aab374b8004f4f2407d070b6a2ca784ce8a2",
"sha256:895c63e93b94ac1e1690f5fdd40b65f07c8171e3e53cbd7793b5b96c0e0a7f24"
],
"version": "==0.3.1"
},
"pexpect": {
"hashes": [
"sha256:2a8e88259839571d1251d278476f3eec5db26deb73a70be5ed5dc5435e418aba",
"sha256:3fbd41d4caf27fa4a377bfd16fef87271099463e6fa73e92a52f92dfee5d425b"
],
"markers": "sys_platform != 'win32'",
"version": "==4.6.0"
},
"pickleshare": {
"hashes": [
"sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca",
"sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"
],
"version": "==0.7.5"
},
"pluggy": {
"hashes": [
"sha256:8ddc32f03971bfdf900a81961a48ccf2fb677cf7715108f85295c67405798616",
"sha256:980710797ff6a041e9a73a5787804f848996ecaa6f8a1b1e08224a5894f2074a"
],
"version": "==0.8.1"
},
"prompt-toolkit": {
"hashes": [
"sha256:c1d6aff5252ab2ef391c2fe498ed8c088066f66bc64a8d5c095bbf795d9fec34",
"sha256:d4c47f79b635a0e70b84fdb97ebd9a274203706b1ee5ed44c10da62755cf3ec9",
"sha256:fd17048d8335c1e6d5ee403c3569953ba3eb8555d710bfc548faf0712666ea39"
],
"version": "==2.0.7"
},
"ptyprocess": {
"hashes": [
"sha256:923f299cc5ad920c68f2bc0bc98b75b9f838b93b599941a6b63ddbc2476394c0",
"sha256:d7cc528d76e76342423ca640335bd3633420dc1366f258cb31d05e865ef5ca1f"
],
"version": "==0.6.0"
},
"py": {
"hashes": [
"sha256:bf92637198836372b520efcba9e020c330123be8ce527e535d185ed4b6f45694",
"sha256:e76826342cefe3c3d5f7e8ee4316b80d1dd8a300781612ddbc765c17ba25a6c6"
],
"version": "==1.7.0"
},
"pycodestyle": {
"hashes": [
"sha256:cbc619d09254895b0d12c2c691e237b2e91e9b2ecf5e84c26b35400f93dcfb83",
"sha256:cbfca99bd594a10f674d0cd97a3d802a1fdef635d4361e1a2658de47ed261e3a"
],
"version": "==2.4.0"
},
"pyflakes": {
"hashes": [
"sha256:9a7662ec724d0120012f6e29d6248ae3727d821bba522a0e6b356eff19126a49",
"sha256:f661252913bc1dbe7fcfcbf0af0db3f42ab65aabd1a6ca68fe5d466bace94dae"
],
"version": "==2.0.0"
},
"pygments": {
"hashes": [
"sha256:5ffada19f6203563680669ee7f53b64dabbeb100eb51b61996085e99c03b284a",
"sha256:e8218dd399a61674745138520d0d4cf2621d7e032439341bc3f647bff125818d"
],
"version": "==2.3.1"
},
"pytest": {
"hashes": [
"sha256:41568ea7ecb4a68d7f63837cf65b92ce8d0105e43196ff2b26622995bb3dc4b2",
"sha256:c3c573a29d7c9547fb90217ece8a8843aa0c1328a797e200290dc3d0b4b823be"
],
"index": "pypi",
"version": "==4.1.1"
},
"six": {
"hashes": [
"sha256:3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c",
"sha256:d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73"
],
"version": "==1.12.0"
},
"traitlets": {
"hashes": [
"sha256:9c4bd2d267b7153df9152698efb1050a5d84982d3384a37b2c1f7723ba3e7835",
"sha256:c6cb5e6f57c5a9bdaa40fa71ce7b4af30298fbab9ece9815b5d995ab6217c7d9"
],
"version": "==4.3.2"
},
"wcwidth": {
"hashes": [
"sha256:3df37372226d6e63e1b1e1eda15c594bca98a22d33a23832a90998faa96bc65e",
"sha256:f4ebe71925af7b40a864553f761ed559b43544f8f71746c2d756c7fe788ade7c"
],
"version": "==0.1.7"
}
}
}

View File

@ -1,20 +1,94 @@
# CGSpace Statistics API
A quick and dirty REST API to expose Solr view and download statistics for items in a DSpace repository.
# DSpace Statistics API [![Build Status](https://travis-ci.org/ilri/dspace-statistics-api.svg?branch=master)](https://travis-ci.org/ilri/dspace-statistics-api)
DSpace stores item view and download events in a Solr "statistics" core. This information is available for use in the various DSpace user interfaces, but is not exposed externally via any APIs. The DSpace 4+ [REST API](https://wiki.duraspace.org/display/DSDOC5x/REST+API), for example, only exposes information about communities, collections, item metadata, and bitstreams.
Written and tested in Python 3.6. SolrClient (0.2.1) does not currently run in Python 3.7.0.
This project contains an indexer and a [Falcon-based](https://falcon.readthedocs.io/) web application to make the statistics available via simple REST API. You can read more about the Solr queries used to gather the item view and download statistics on the [DSpace wiki](https://wiki.duraspace.org/display/DSPACE/Solr).
## Requirements
- Python 3.5+
- PostgreSQL version 9.5+ (due to [`UPSERT` support](https://wiki.postgresql.org/wiki/UPSERT))
- DSpace with [Solr usage statistics enabled](https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics) (tested with 5.x)
## Installation
Create a virtual environment and run it:
Create a Python virtual environment and install the dependencies:
$ virtualenv -p /usr/bin/python3.6 venv
$ . venv/bin/activate
$ pip install falcon gunicorn SolrClient
$ gunicorn app:api
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
## Running
Set up the environment variables for Solr and PostgreSQL:
$ export SOLR_SERVER=http://localhost:8080/solr
$ export DATABASE_NAME=dspacestatistics
$ export DATABASE_USER=dspacestatistics
$ export DATABASE_PASS=dspacestatistics
$ export DATABASE_HOST=localhost
Index the Solr statistics core to populate the PostgreSQL database:
$ python -m dspace_statistics_api.indexer
Run the REST API:
$ gunicorn dspace_statistics_api.app
Test to see if there are any statistics:
$ curl 'http://localhost:8000/items?limit=1'
## Testing
Install development packages using pip:
$ pip install -r requirements-dev.txt
Run tests:
$ pytest
## Deployment
There are example systemd service and timer units in the `contrib` directory. The API service listens on localhost by default so you will need to expose it publicly using a web server like nginx.
An example nginx configuration is:
```
server {
#...
location ~ /rest/statistics/?(.*) {
access_log /var/log/nginx/statistics.log;
proxy_pass http://statistics_api/$1$is_args$args;
}
}
upstream statistics_api {
server 127.0.0.1:5000;
}
```
This would expose the API at `/rest/statistics`.
## Using the API
The API exposes the following endpoints:
- GET `/`: return a basic API documentation page.
- GET `/items`: return views and downloads for all items that Solr knows about¹. Accepts `limit` and `page` query parameters for pagination of results (`limit` must be an integer between 1 and 100, and `page` must be an integer greater than or equal to 0).
- GET `/item/id`: return views and downloads for a single item (`id` must be a positive integer). Returns HTTP 404 if an item id is not found.
The item id is the *internal* id for an item. You can get these from the standard DSpace REST API.
¹ We are querying the Solr statistics core, which technically only knows about items that have either views or downloads. If an item is not present here you can assume it has zero views and zero downloads, but not necessarily that it does not exist in the repository.
## Todo
- Ability to return a paginated list of items (on a different route?)
- Add API documentation
- Better logging
- Version API
- Use JSON in PostgreSQL
- Make community and collection stats available
- Switch to [Python 3.6+ f-string syntax](https://realpython.com/python-f-strings/)
## License
This work is licensed under the [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html).
The license allows you to use and modify the work for personal and commercial purposes, but if you distribute the work you must provide users with a means to access the source code for the version you are distributing. Read more about the [GPLv3 at TL;DR Legal](https://tldrlegal.com/license/gnu-general-public-license-v3-(gpl-3)).

45
app.py

@ -1,45 +0,0 @@
# Tested with Python 3.6
# See DSpace Solr docs for tips about parameters
# https://wiki.duraspace.org/display/DSPACE/Solr
from config import SOLR_CORE
from database import database_connection
import falcon
from solr import solr_connection
db = database_connection()
solr = solr_connection()
class ItemResource:
    def on_get(self, req, resp, item_id):
        """Handles GET requests"""

        cursor = db.cursor()

        # get item views (and catch the TypeError if item doesn't have any views)
        cursor.execute('SELECT views FROM itemviews WHERE id={0}'.format(item_id))
        try:
            views = cursor.fetchone()['views']
        except:
            views = 0

        # get item downloads (and catch the TypeError if item doesn't have any downloads)
        cursor.execute('SELECT downloads FROM itemdownloads WHERE id={0}'.format(item_id))
        try:
            downloads = cursor.fetchone()['downloads']
        except:
            downloads = 0

        cursor.close()

        statistics = {
            'id': item_id,
            'views': views,
            'downloads': downloads
        }

        resp.media = statistics


api = falcon.API()
api.add_route('/item/{item_id:int}', ItemResource())
# vim: set sw=4 ts=4 expandtab:

View File

@ -1,9 +0,0 @@
import os
# Check if Solr connection information was provided in the environment
SOLR_SERVER = os.environ.get('SOLR_SERVER', 'http://localhost:8080/solr')
SOLR_CORE = os.environ.get('SOLR_CORE', 'statistics')
SQLITE_DB = os.environ.get('SQLITE_DB', 'statistics.db')
# vim: set sw=4 ts=4 expandtab:

View File

@ -1,18 +0,0 @@
[Unit]
Description=CGSpace Statistics API
After=network.target
[Service]
Environment=SOLR_SERVER=http://localhost:8081/solr
Environment=SOLR_CORE=statistics
User=nobody
Group=nogroup
WorkingDirectory=/opt/ilri/cgspace-statistics-api
ExecStart=/opt/ilri/cgspace-statistics-api/venv/bin/gunicorn \
--bind 127.0.0.1:5000 \
app:api
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,20 @@
[Unit]
Description=DSpace Statistics API
After=network.target
[Service]
Environment=DATABASE_NAME=dspacestatistics
Environment=DATABASE_USER=dspacestatistics
Environment=DATABASE_PASS=dspacestatistics
Environment=DATABASE_HOST=localhost
User=nobody
Group=nogroup
WorkingDirectory=/var/lib/dspace-statistics-api
ExecStart=/var/lib/dspace-statistics-api/venv/bin/gunicorn \
--bind 127.0.0.1:5000 \
dspace_statistics_api.app
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,17 @@
[Unit]
Description=DSpace Statistics Indexer
After=tomcat7.target
[Service]
Environment=SOLR_SERVER=http://localhost:8081/solr
Environment=DATABASE_NAME=dspacestatistics
Environment=DATABASE_USER=dspacestatistics
Environment=DATABASE_PASS=dspacestatistics
Environment=DATABASE_HOST=localhost
User=nobody
Group=nogroup
WorkingDirectory=/var/lib/dspace-statistics-api
ExecStart=/var/lib/dspace-statistics-api/venv/bin/python -m dspace_statistics_api.indexer
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,12 @@
[Unit]
Description=DSpace Statistics Indexer
[Timer]
# twice a day, at 6AM and 6PM
OnCalendar=*-*-* 06,18:00:00
# Add a random delay of 0-3600 seconds
RandomizedDelaySec=3600
Persistent=true
[Install]
WantedBy=timers.target

View File

@ -1,11 +0,0 @@
from config import SQLITE_DB
import sqlite3
def database_connection():
    connection = sqlite3.connect(SQLITE_DB)
    # allow iterating over row results by column key
    connection.row_factory = sqlite3.Row

    return connection
# vim: set sw=4 ts=4 expandtab:

View File

View File

@ -0,0 +1,81 @@
from .database import DatabaseManager
import falcon
class RootResource:
    def on_get(self, req, resp):
        resp.status = falcon.HTTP_200
        resp.content_type = 'text/html'
        with open('dspace_statistics_api/docs/index.html', 'r') as f:
            resp.body = f.read()


class AllItemsResource:
    def on_get(self, req, resp):
        """Handles GET requests"""
        # Return HTTPBadRequest if id parameter is not present and valid
        limit = req.get_param_as_int("limit", min=0, max=100) or 100
        page = req.get_param_as_int("page", min=0) or 0
        offset = limit * page

        with DatabaseManager() as db:
            db.set_session(readonly=True)

            with db.cursor() as cursor:
                # get total number of items so we can estimate the pages
                cursor.execute('SELECT COUNT(id) FROM items')
                pages = round(cursor.fetchone()[0] / limit)

                # get statistics, ordered by id, and use limit and offset to page through results
                cursor.execute('SELECT id, views, downloads FROM items ORDER BY id ASC LIMIT {} OFFSET {}'.format(limit, offset))

                # create a list to hold dicts of item stats
                statistics = list()

                # iterate over results and build statistics object
                for item in cursor:
                    statistics.append({'id': item['id'], 'views': item['views'], 'downloads': item['downloads']})

        message = {
            'currentPage': page,
            'totalPages': pages,
            'limit': limit,
            'statistics': statistics
        }

        resp.media = message


class ItemResource:
    def on_get(self, req, resp, item_id):
        """Handles GET requests"""

        with DatabaseManager() as db:
            db.set_session(readonly=True)

            with db.cursor() as cursor:
                cursor.execute('SELECT views, downloads FROM items WHERE id={}'.format(item_id))

                if cursor.rowcount == 0:
                    raise falcon.HTTPNotFound(
                        title='Item not found',
                        description='The item with id "{}" was not found.'.format(item_id)
                    )
                else:
                    results = cursor.fetchone()

                    statistics = {
                        'id': item_id,
                        'views': results['views'],
                        'downloads': results['downloads']
                    }

                    resp.media = statistics


api = application = falcon.API()
api.add_route('/', RootResource())
api.add_route('/items', AllItemsResource())
api.add_route('/item/{item_id:int}', ItemResource())

# vim: set sw=4 ts=4 expandtab:

View File

@ -0,0 +1,12 @@
import os
# Check if Solr connection information was provided in the environment
SOLR_SERVER = os.environ.get('SOLR_SERVER', 'http://localhost:8080/solr')
DATABASE_NAME = os.environ.get('DATABASE_NAME', 'dspacestatistics')
DATABASE_USER = os.environ.get('DATABASE_USER', 'dspacestatistics')
DATABASE_PASS = os.environ.get('DATABASE_PASS', 'dspacestatistics')
DATABASE_HOST = os.environ.get('DATABASE_HOST', 'localhost')
DATABASE_PORT = os.environ.get('DATABASE_PORT', '5432')
# vim: set sw=4 ts=4 expandtab:

View File

@ -0,0 +1,30 @@
from .config import DATABASE_NAME
from .config import DATABASE_USER
from .config import DATABASE_PASS
from .config import DATABASE_HOST
from .config import DATABASE_PORT
import falcon
import psycopg2
import psycopg2.extras
class DatabaseManager():
    '''Manage database connection.'''

    def __init__(self):
        self._connection_uri = 'dbname={} user={} password={} host={} port={}'.format(DATABASE_NAME, DATABASE_USER, DATABASE_PASS, DATABASE_HOST, DATABASE_PORT)

    def __enter__(self):
        try:
            self._connection = psycopg2.connect(self._connection_uri, cursor_factory=psycopg2.extras.DictCursor)
        except psycopg2.OperationalError:
            title = '500 Internal Server Error'
            description = 'Could not connect to database'
            raise falcon.HTTPInternalServerError(title, description)

        return self._connection

    def __exit__(self, exc_type, exc_value, exc_traceback):
        self._connection.close()
# vim: set sw=4 ts=4 expandtab:

View File

@ -0,0 +1,20 @@
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<title>DSpace Statistics API</title>
</head>
<body>
<h1>DSpace Statistics API</h1>
<p>This site is running the <a href="https://github.com/ilri/dspace-statistics-api" title="DSpace Statistics API project">DSpace Statistics API</a>. The following endpoints are available:</p>
<ul>
<li>GET <code>/</code>: return a basic API documentation page.</li>
<li>GET <code>/items</code>: return views and downloads for all items that Solr knows about¹. Accepts <code>limit</code> and <code>page</code> query parameters for pagination of results (<code>limit</code> must be an integer between 1 and 100, and <code>page</code> must be an integer greater than or equal to 0).</li>
<li>GET <code>/item/id</code>: return views and downloads for a single item (<code>id</code> must be a positive integer). Returns HTTP 404 if an item id is not found.</li>
</ul>
<p>The item id is the <em>internal</em> id for an item. You can get these from the standard DSpace REST API.</p>
<p>¹ We are querying the Solr statistics core, which technically only knows about items that have either views or downloads. If an item is not present here you can assume it has zero views and zero downloads, but not necessarily that it does not exist in the repository.</p>
</body>
</html>

View File

@ -0,0 +1,236 @@
#
# indexer.py
#
# Copyright 2018 Alan Orth.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# ---
#
# Connects to a DSpace Solr statistics core and ingests item views and downloads
# into a PostgreSQL database for use by other applications (like an API).
#
# This script is written for Python 3.5+ and requires several modules that you
# can install with pip (I recommend using a Python virtual environment):
#
# $ pip install SolrClient psycopg2-binary
#
# See: https://solrclient.readthedocs.io/en/latest/SolrClient.html
# See: https://wiki.duraspace.org/display/DSPACE/Solr
from .database import DatabaseManager
import json
import psycopg2.extras
import re
import requests
from .solr import solr_connection
# Enumerate the cores in Solr to determine if statistics have been sharded into
# yearly shards by DSpace's stats-util or not (for example: statistics-2018).
def get_statistics_shards():
# Initialize an empty list for statistics core years
statistics_core_years = []
# URL for Solr status to check active cores
solr_url = solr.host + '/admin/cores?action=STATUS&wt=json'
res = requests.get(solr_url)
if res.status_code == requests.codes.ok:
data = res.json()
# Iterate over active cores from Solr's STATUS response (cores are in
# the status array of this response).
for core in data['status']:
# Pattern to match, for example: statistics-2018
pattern = re.compile('^statistics-[0-9]{4}$')
if not pattern.match(core):
continue
# Append current core to list
statistics_core_years.append(core)
# Initialize a string to hold our shards (may end up being empty if the Solr
# core has not been processed by stats-util).
shards = str()
if len(statistics_core_years) > 0:
# Begin building a string of shards starting with the default one
shards = '{}/statistics'.format(solr.host)
for core in statistics_core_years:
# Create a comma-separated list of shards to pass to our Solr query
#
# See: https://wiki.apache.org/solr/DistributedSearch
shards += ',{}/{}'.format(solr.host, core)
# Return the string of shards, which may actually be empty. Solr doesn't
# seem to mind if the shards query parameter is empty and I haven't seen
# any negative performance impact so this should be fine.
return shards
def index_views():
# get total number of distinct facets for items with a minimum of 1 view,
# otherwise Solr returns all kinds of weird ids that are actually not in
# the database. Also, stats are expensive, but we need stats.calcdistinct
# so we can get the countDistinct summary.
#
# see: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
res = solr.query('statistics', {
'q': 'type:2',
'fq': 'isBot:false AND statistics_type:view',
'facet': True,
'facet.field': 'id',
'facet.mincount': 1,
'facet.limit': 1,
'facet.offset': 0,
'stats': True,
'stats.field': 'id',
'stats.calcdistinct': True,
'shards': shards
}, rows=0)
try:
# get total number of distinct facets (countDistinct)
results_totalNumFacets = json.loads(res.get_json())['stats']['stats_fields']['id']['countDistinct']
except TypeError:
print('No item views to index, exiting.')
exit(0)
# divide results into "pages" (cast to int to effectively round down)
results_per_page = 100
results_num_pages = int(results_totalNumFacets / results_per_page)
results_current_page = 0
with DatabaseManager() as db:
with db.cursor() as cursor:
# create an empty list to store values for batch insertion
data = []
while results_current_page <= results_num_pages:
print('Indexing item views (page {} of {})'.format(results_current_page, results_num_pages))
res = solr.query('statistics', {
'q': 'type:2',
'fq': 'isBot:false AND statistics_type:view',
'facet': True,
'facet.field': 'id',
'facet.mincount': 1,
'facet.limit': results_per_page,
'facet.offset': results_current_page * results_per_page,
'shards': shards
}, rows=0)
# SolrClient's get_facets() returns a dict of dicts
views = res.get_facets()
# in this case iterate over the 'id' dict and get the item ids and views
for item_id, item_views in views['id'].items():
data.append((item_id, item_views))
# do a batch insert of values from the current "page" of results
sql = 'INSERT INTO items(id, views) VALUES %s ON CONFLICT(id) DO UPDATE SET views=excluded.views'
psycopg2.extras.execute_values(cursor, sql, data, template='(%s, %s)')
db.commit()
# clear all items from the list so we can populate it with the next batch
data.clear()
results_current_page += 1
def index_downloads():
# get the total number of distinct facets for items with at least 1 download
res = solr.query('statistics', {
'q': 'type:0',
'fq': 'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
'facet': True,
'facet.field': 'owningItem',
'facet.mincount': 1,
'facet.limit': 1,
'facet.offset': 0,
'stats': True,
'stats.field': 'owningItem',
'stats.calcdistinct': True,
'shards': shards
}, rows=0)
try:
# get total number of distinct facets (countDistinct)
results_totalNumFacets = json.loads(res.get_json())['stats']['stats_fields']['owningItem']['countDistinct']
except TypeError:
print('No item downloads to index, exiting.')
exit(0)
# divide results into "pages" (cast to int to effectively round down)
results_per_page = 100
results_num_pages = int(results_totalNumFacets / results_per_page)
results_current_page = 0
with DatabaseManager() as db:
with db.cursor() as cursor:
# create an empty list to store values for batch insertion
data = []
while results_current_page <= results_num_pages:
print('Indexing item downloads (page {} of {})'.format(results_current_page, results_num_pages))
res = solr.query('statistics', {
'q': 'type:0',
'fq': 'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
'facet': True,
'facet.field': 'owningItem',
'facet.mincount': 1,
'facet.limit': results_per_page,
'facet.offset': results_current_page * results_per_page,
'shards': shards
}, rows=0)
# SolrClient's get_facets() returns a dict of dicts
downloads = res.get_facets()
# in this case iterate over the 'owningItem' dict and get the item ids and downloads
for item_id, item_downloads in downloads['owningItem'].items():
data.append((item_id, item_downloads))
# do a batch insert of values from the current "page" of results
sql = 'INSERT INTO items(id, downloads) VALUES %s ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads'
psycopg2.extras.execute_values(cursor, sql, data, template='(%s, %s)')
db.commit()
# clear all items from the list so we can populate it with the next batch
data.clear()
results_current_page += 1
solr = solr_connection()
with DatabaseManager() as db:
with db.cursor() as cursor:
# create table to store item views and downloads
cursor.execute('''CREATE TABLE IF NOT EXISTS items
(id INT PRIMARY KEY, views INT DEFAULT 0, downloads INT DEFAULT 0)''')
# commit the table creation before closing the database connection
db.commit()
shards = get_statistics_shards()
index_views()
index_downloads()
# vim: set sw=4 ts=4 expandtab:
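To make the shards handling concrete: when stats-util has created yearly cores, get_statistics_shards() returns a comma-separated list of core URLs that is passed straight through as the shards query parameter, something like the following sketch (the host and years are illustrative only; the real host comes from SOLR_SERVER and the years depend on which statistics-YYYY cores exist):

shards = ('http://localhost:8080/solr/statistics,'
          'http://localhost:8080/solr/statistics-2017,'
          'http://localhost:8080/solr/statistics-2018')

Because the module uses relative imports and does its work at import time, it is presumably invoked as a module (for example via Python's -m switch) rather than run as a standalone script.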

View File

@@ -1,6 +1,7 @@
from config import SOLR_SERVER
from .config import SOLR_SERVER
from SolrClient import SolrClient
def solr_connection():
connection = SolrClient(SOLR_SERVER)

View File

@@ -1,106 +0,0 @@
#!/usr/bin/env python
#
# Tested with Python 3.6
# See DSpace Solr docs for tips about parameters
# https://wiki.duraspace.org/display/DSPACE/Solr
from config import SOLR_CORE
from database import database_connection
from solr import solr_connection
def index_views():
print("Populating database with item views.")
# determine the total number of items with views (aka Solr's numFound)
res = solr.query(SOLR_CORE, {
'q':'type:2',
'fq':'isBot:false AND statistics_type:view',
'facet':True,
'facet.field':'id',
}, rows=0)
# divide results into "pages" (numFound / 100)
results_numFound = res.get_num_found()
results_per_page = 100
results_num_pages = round(results_numFound / results_per_page)
results_current_page = 0
while results_current_page <= results_num_pages:
print('Page {0} of {1}.'.format(results_current_page, results_num_pages))
res = solr.query(SOLR_CORE, {
'q':'type:2',
'fq':'isBot:false AND statistics_type:view',
'facet':True,
'facet.field':'id',
'facet.limit':results_per_page,
'facet.offset':results_current_page * results_per_page
})
# make sure total number of results > 0
if res.get_num_found() > 0:
# SolrClient's get_facets() returns a dict of dicts
views = res.get_facets()
# in this case iterate over the 'id' dict and get the item ids and views
for item_id, item_views in views['id'].items():
db.execute('''REPLACE INTO itemviews VALUES (?, ?)''', (item_id, item_views))
db.commit()
results_current_page += 1
def index_downloads():
print("Populating database with item downloads.")
# determine the total number of items with downloads (aka Solr's numFound)
res = solr.query(SOLR_CORE, {
'q':'type:0',
'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
'facet':True,
'facet.field':'owningItem',
}, rows=0)
# divide results into "pages" (numFound / 100)
results_numFound = res.get_num_found()
results_per_page = 100
results_num_pages = round(results_numFound / results_per_page)
results_current_page = 0
while results_current_page <= results_num_pages:
print('Page {0} of {1}.'.format(results_current_page, results_num_pages))
res = solr.query(SOLR_CORE, {
'q':'type:0',
'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
'facet':True,
'facet.field':'owningItem',
'facet.limit':results_per_page,
'facet.offset':results_current_page * results_per_page
})
# make sure total number of results > 0
if res.get_num_found() > 0:
# SolrClient's get_facets() returns a dict of dicts
downloads = res.get_facets()
# in this case iterate over the 'owningItem' dict and get the item ids and downloads
for item_id, item_downloads in downloads['owningItem'].items():
db.execute('''REPLACE INTO itemdownloads VALUES (?, ?)''', (item_id, item_downloads))
db.commit()
results_current_page += 1
db = database_connection()
solr = solr_connection()
# use separate views and downloads tables so we can REPLACE INTO carelessly (ie, item may have views but no downloads)
db.execute('''CREATE TABLE IF NOT EXISTS itemviews
(id integer primary key, views integer)''')
db.execute('''CREATE TABLE IF NOT EXISTS itemdownloads
(id integer primary key, downloads integer)''')
index_views()
index_downloads()
db.close()
# vim: set sw=4 ts=4 expandtab:

4
pytest.ini Normal file
View File

@@ -0,0 +1,4 @@
[pytest]
addopts= -rsxX -s -v --strict
filterwarnings =
error::UserWarning
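These options make pytest report skipped and expected-failure outcomes (-rsxX), disable output capturing (-s), run verbosely (-v), and treat unregistered markers as errors (--strict), while any UserWarning is escalated to an error. A sketch of invoking the same suite programmatically, assuming the test database from tests/dspacestatistics.sql has already been loaded into PostgreSQL:

import sys

import pytest

# pytest.main() picks up pytest.ini automatically when run from the project root.
sys.exit(pytest.main(['tests/test_api.py']))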

25
requirements-dev.txt Normal file
View File

@@ -0,0 +1,25 @@
-i https://pypi.org/simple
atomicwrites==1.2.1
attrs==18.2.0
backcall==0.1.0
decorator==4.3.0
flake8==3.6.0
ipython-genutils==0.2.0
ipython==7.2.0
jedi==0.13.2
mccabe==0.6.1
more-itertools==5.0.0
parso==0.3.1
pexpect==4.6.0 ; sys_platform != 'win32'
pickleshare==0.7.5
pluggy==0.8.1
prompt-toolkit==2.0.7
ptyprocess==0.6.0
py==1.7.0
pycodestyle==2.4.0
pyflakes==2.0.0
pygments==2.3.1
pytest==4.1.1
six==1.12.0
traitlets==4.3.2
wcwidth==0.1.7

7
requirements.txt Normal file
View File

@@ -0,0 +1,7 @@
-i https://pypi.org/simple
falcon==1.4.1
git+https://github.com/alanorth/SolrClient.git@c629e3475be37c82770b2be61748be7e29882648#egg=solrclient
gunicorn==19.9.0
psycopg2-binary==2.7.6.1
python-mimeparse==1.6.0
six==1.12.0

0
tests/__init__.py Normal file
View File

77226
tests/dspacestatistics.sql Normal file

File diff suppressed because it is too large

67
tests/test_api.py Normal file
View File

@@ -0,0 +1,67 @@
from falcon import testing
import json
import pytest
from dspace_statistics_api.app import api
@pytest.fixture
def client():
return testing.TestClient(api)
def test_get_docs(client):
'''Test requesting the documentation at the root.'''
response = client.simulate_get('/')
assert isinstance(response.content, bytes)
assert response.status_code == 200
def test_get_item(client):
'''Test requesting a single item.'''
response = client.simulate_get('/item/17')
response_doc = json.loads(response.text)
assert isinstance(response_doc['downloads'], int)
assert isinstance(response_doc['id'], int)
assert isinstance(response_doc['views'], int)
assert response.status_code == 200
def test_get_missing_item(client):
'''Test requesting a single non-existing item.'''
response = client.simulate_get('/item/1')
assert response.status_code == 404
def test_get_items(client):
'''Test requesting 100 items.'''
response = client.simulate_get('/items', query_string='limit=100')
response_doc = json.loads(response.text)
assert isinstance(response_doc['currentPage'], int)
assert isinstance(response_doc['totalPages'], int)
assert isinstance(response_doc['statistics'], list)
assert response.status_code == 200
def test_get_items_invalid_limit(client):
'''Test requesting 100 items with an invalid limit parameter.'''
response = client.simulate_get('/items', query_string='limit=101')
assert response.status_code == 400
def test_get_items_invalid_page(client):
'''Test requesting 100 items with an invalid page parameter.'''
response = client.simulate_get('/items', query_string='page=-1')
assert response.status_code == 400