mirror of https://github.com/ilri/dspace-statistics-api.git synced 2024-12-26 22:44:29 +01:00
Commit Graph

123 Commits

Author SHA1 Message Date
6fd2827a7c
Use Python's native json instead of ujson
Falcon can optionally use ujson to speed up JSON (de)serialization,
but Falcon's already really fast and requiring ujson actually makes
deployment trickier in some cases (for example in Docker containers
that are based on Alpine Linux).

Here are some tests of Falcon 1.4.1 on Python 3.5 from my laptop:

    1. falcon...............60172 req/sec or 16.62 μs/req (36x)
    2. falcon-ext...........34186 req/sec or 29.25 μs/req (20x)
    3. bottle...............32924 req/sec or 30.37 μs/req (20x)
    4. werkzeug.............11948 req/sec or 83.70 μs/req (7x)
    5. flask.................6654 req/sec or 150.30 μs/req (4x)
    6. django................4565 req/sec or 219.04 μs/req (3x)
    7. pecan.................1672 req/sec or 598.19 μs/req (1x)

The tests were conducted with Falcon's official Docker benchmarking
tools on my Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz on Arch Linux.

See: https://github.com/falconry/falcon/tree/master/docker
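
For illustration, a minimal sketch of a Falcon 1.x resource that serializes
its response with the standard library json module (the resource, route, and
field names here are made up, not the actual app.py code):

    import json

    import falcon

    class ItemResource:
        def on_get(self, req, resp, item_id):
            # The standard library is fast enough; no ujson dependency needed.
            resp.body = json.dumps({'id': item_id, 'views': 0, 'downloads': 0})

    api = falcon.API()
    api.add_route('/item/{item_id}', ItemResource())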
2018-10-24 14:08:23 +03:00
62142eb79e
CHANGELOG.md: Move unreleased changes to v0.5.0 2018-10-24 12:02:42 +03:00
fda0321942
CHANGELOG.md: Add note about Solr in API component 2018-10-24 12:01:47 +03:00
963aa245c8
app.py: Don't initialize Solr connection
We only need Solr in the indexing component, not for the API itself.
2018-10-24 11:59:50 +03:00
568ff2eebb
CHANGELOG.md: Add note about nginx configuration 2018-10-23 14:56:44 +03:00
deecb8a10b
README.md: Add example nginx configuration 2018-10-23 14:55:36 +03:00
12f45d7c08
contrib: Adjust example path 2018-10-23 14:34:29 +03:00
f65089f9ce
CHANGELOG.md: Update and move to 0.4.3 release 2018-10-17 09:51:44 +03:00
1db5cf1c29
README.md: Grammar 2018-10-17 09:51:35 +03:00
e581c4b1aa
README.md: Improve documentation 2018-10-17 09:50:30 +03:00
e8d356c9ca
README.md: Add TODO about Python 3.6+ f-string syntax
They are faster.
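
For example (variable name made up), the f-string avoids the str.format()
method call:

    item_id = 89266

    # str.format(), compatible with Python 3.5:
    query = 'type:2 AND id:{0}'.format(item_id)

    # Equivalent Python 3.6+ f-string, which is slightly faster:
    query = f'type:2 AND id:{item_id}'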
2018-10-17 09:13:25 +03:00
34a9b8d629
CHANGELOG.md: Add unreleased changes for Travis CI 2018-10-14 19:02:09 +03:00
41e3d66a0e
.travis.yml: Only build master branch 2018-10-14 19:00:31 +03:00
9b2a6137b4
README.md: Add Travis CI badge
For now this is only an indicator that the Python requirements can
be satisfied and installed.
2018-10-14 18:58:12 +03:00
600b986f99
.travis.yml: Use Python 3.7-dev instead of 3.7
I don't think Travis supports Python 3.7 yet because the builds for
that version keep failing.
2018-10-14 18:57:30 +03:00
49a7790794
.travis.yml: Move script to one line 2018-10-14 18:53:45 +03:00
f2deba627c
.travis.yml: Run pip install as script
Basically there are no tests for now, so I just want to check that
requirements.txt is correct and that all dependencies can be
installed.
2018-10-14 18:47:14 +03:00
9323513794
README.md: Update instructions 2018-10-14 18:45:40 +03:00
daf15610f2
CHANGELOG.md: Update changes and move to 0.4.2 2018-10-05 00:19:18 +03:00
4ede966dbb
indexer.py: Fix logic error in SQL insert
This was inserting correctly on the first run, but subsequent runs
were inserting into the incorrect column on conflict. This made it
seem like there were downloads for items where there were none.
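
For illustration, a hypothetical sketch of the kind of upsert involved (the
real table and column names in indexer.py may differ); the point is that the
ON CONFLICT clause must update the same column the statement is inserting:

    import psycopg2

    connection = psycopg2.connect('dbname=dspacestatistics')  # assumed DSN
    cursor = connection.cursor()

    # Hypothetical values from the Solr views facet.
    item_id, views = 89266, 100

    # On conflict, update 'views' (the column being inserted), not 'downloads';
    # otherwise repeated runs write view counts into the downloads column.
    cursor.execute('INSERT INTO items (id, views) VALUES (%s, %s) '
                   'ON CONFLICT(id) DO UPDATE SET views=excluded.views',
                   (item_id, views))
    connection.commit()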
2018-10-05 00:16:24 +03:00
3580473a6d
README.md: Add TODO about JSON in PostgreSQL 2018-10-03 20:08:18 +03:00
071c24535f
README.md: Add TODO about API versions 2018-10-03 11:12:18 +03:00
4291aecac4
README.md: Formatting 2018-09-27 12:45:15 +03:00
46bf537e88
CHANGELOG.md: Add note about cursor change 2018-09-27 11:08:42 +03:00
eaca5354d3
app.py: Iterate directly on cursor
We don't need to create an intermediate variable for the results of
the SQL query because psycopg2's cursor is iterable.

See: http://initd.org/psycopg/docs/cursor.html
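
A minimal sketch of the pattern (table and column names assumed):

    import psycopg2

    connection = psycopg2.connect('dbname=dspacestatistics')  # assumed DSN
    cursor = connection.cursor()
    cursor.execute('SELECT id, views, downloads FROM items')

    # The cursor is iterable, so no intermediate fetchall() variable is needed.
    for item_id, views, downloads in cursor:
        print(item_id, views, downloads)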
2018-09-27 11:03:44 +03:00
4600288ee4
CHANGELOG.md: Add note about ujson 2018-09-27 09:53:42 +03:00
8179563378
requirements.txt: pip freeze 2018-09-27 09:53:16 +03:00
b14c3eef4d
indexer.py: Use ujson instead of json
Falcon optionally makes use of the ujson library to speed up media
(de)serialization, error serialization, and query string parsing.

See: https://falcon.readthedocs.io/en/stable/user/install.html
2018-09-27 09:51:40 +03:00
71a789b13f
CHANGELOG.md: Add unreleased changes 2018-09-27 09:30:48 +03:00
c68ddacaa4
README.md: Add note about systemd units for deployment 2018-09-27 09:26:47 +03:00
9c9e79769e
README.md: Add TODO 2018-09-27 09:17:45 +03:00
2ad5ade556
README.md: Improve introduction 2018-09-27 09:12:52 +03:00
7412a09670
README.md: Improve introduction 2018-09-27 09:07:28 +03:00
bb744a00b8
README.md: Add requirements 2018-09-27 08:57:27 +03:00
7499b89d99
CHANGELOG.md: Move unreleased changes to v0.4.1 2018-09-27 08:15:54 +03:00
2c1e4952b1
indexer.py: Remove comment
I had left this there so I could remember how to get the number of
facets, but I don't need it anymore.
2018-09-26 23:27:48 +03:00
379f202c3f
CHANGELOG.md: Add unreleased changes 2018-09-26 23:26:48 +03:00
560fa6056d
README.md: Remove batch inserts from TODO 2018-09-26 23:25:35 +03:00
385a34e5d0
indexer.py: Use psycopg2's execute_values to batch inserts
Batch inserts are much faster than a series of individual inserts
because they drastically reduce the overhead caused by round-trip
communication with the server. My tests in development confirm:

  - cursor.execute(): 19 seconds
  - execute_values(): 14 seconds

I'm currently only working with 4,500 rows, but I will experiment
with larger data sets, as well as larger batches. For example, on
the PostgreSQL mailing list a user reports doing 10,000 rows with
a page size of 100.

See: http://initd.org/psycopg/docs/extras.html#psycopg2.extras.execute_values
See: https://github.com/psycopg/psycopg2/issues/491#issuecomment-276551038
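
A rough sketch of the batching approach, assuming rows of (id, views) tuples
gathered from Solr; the table and column names are illustrative:

    import psycopg2
    from psycopg2.extras import execute_values

    connection = psycopg2.connect('dbname=dspacestatistics')  # assumed DSN
    cursor = connection.cursor()

    # Hypothetical (id, views) rows collected from the Solr facets.
    rows = [(89266, 100), (89267, 42), (89268, 7)]

    # One round trip per page instead of one per row.
    execute_values(cursor,
                   'INSERT INTO items (id, views) VALUES %s '
                   'ON CONFLICT(id) DO UPDATE SET views=excluded.views',
                   rows, page_size=100)
    connection.commit()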
2018-09-26 23:10:29 +03:00
d0ea62d2bd
database.py: Use one line for psycopg2 imports 2018-09-26 22:23:24 +03:00
366ae25b8e
README.md: Add link to psycopg2 issue about batch inserts 2018-09-26 22:23:08 +03:00
0f3054ae03
README.md: Add TODO about batch DB inserts 2018-09-26 16:31:13 +03:00
6bf34235d4
CHANGELOG.md: Move unreleased changes to version 0.4.0 2018-09-26 02:51:27 +03:00
e604d8ca81
indexer.py: Major refactor
Basically Solr's numFound has nothing to do with the actual number
of distinct facets that are returned. You need to use Solr's stats
component to get the number of distinct facets, aka countDistinct.
This is apparently deprecated in newer Solr versions, but we're on
version 4.10 and it works there.

Also, I realized that there is no need to return facets for items
without any views or downloads. Using facet.mincount=1 reduces the
result set size and also means we can store less data in the database.
The API returns HTTP 404 Not Found if an item is not in the database
anyway.

I can't figure it out exactly, but there is some weird issue with
Solr's facet results when you don't use facet.mincount=1. For some
reason you get tons of results with an id that doesn't even exist
in the document database, let alone as an actual DSpace item!

See: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
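
For illustration, a rough sketch of the kind of query this implies against the
DSpace statistics core, using requests rather than the project's actual Solr
client; the core URL, filter queries, and field names are assumptions:

    import requests

    # Assumed Solr 4.10 statistics core URL and filters; adjust as needed.
    solr_url = 'http://localhost:8081/solr/statistics/select'
    params = {
        'q': 'type:2',
        'fq': 'isBot:false AND statistics_type:view',
        'facet': 'true',
        'facet.field': 'id',
        'facet.mincount': 1,   # skip items with zero views
        'stats': 'true',
        'stats.field': 'id',
        'stats.calcdistinct': 'true',
        'rows': 0,
        'wt': 'json',
    }
    response = requests.get(solr_url, params=params).json()

    # countDistinct from the stats component is the real number of facets,
    # unlike numFound.
    distinct_facets = response['stats']['stats_fields']['id']['countDistinct']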
2018-09-26 02:41:10 +03:00
fc35b816f3
CHANGELOG.md: Add unreleased changes 2018-09-25 23:09:44 +03:00
9e6a2f7559
contrib/dspace-statistics-indexer.timer: Fix syntax
You can test OnCalendar strings using systemd-analyze calendar, eg:

    # systemd-analyze calendar '*-*-* 06:00:00,18:00:00'
    Failed to parse calendar specification '*-*-* 06:00:00,18:00:00':
    Invalid argument
    # systemd-analyze calendar '*-*-* 06,18:00:00'
    Normalized form: *-*-* 06,18:00:00
        Next elapse: Wed 2018-09-26 06:00:00 EEST
           (in UTC): Wed 2018-09-26 03:00:00 UTC
           From now: 6h left
2018-09-25 23:07:03 +03:00
46cfc3ffbc
CHANGELOG.md: Release version 0.3.2 2018-09-25 13:14:08 +03:00
2850035a4c
Return HTTP 404 when an item id is not found 2018-09-25 13:12:53 +03:00
c0b550109a
README.md: Improve wording 2018-09-25 12:24:52 +03:00
bfceffd84d
indexer.py: Improve inline documentation 2018-09-25 12:23:31 +03:00