mirror of https://github.com/ilri/dspace-statistics-api.git synced 2024-12-22 12:42:19 +01:00

Use Python's native json instead of ujson

Falcon can optionally use ujson to speed up JSON (de)serialization,
but Falcon is already very fast on its own, and requiring ujson makes
deployment trickier in some cases (for example, in Docker containers
based on Alpine Linux).
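
Since this code only deserializes Solr's JSON responses, the standard
library's json.loads() is a drop-in replacement for ujson.loads(). A
minimal sketch of the pattern the indexer uses, with a hypothetical
hand-written payload shaped like Solr's stats output rather than a
real response:

    import json

    # Hypothetical Solr stats response (not real data), shaped like the
    # structure the indexer reads countDistinct from.
    sample_response = """
    {
        "stats": {
            "stats_fields": {
                "id": {
                    "countDistinct": 12345
                }
            }
        }
    }
    """

    # json.loads() accepts the same string input that ujson.loads() did.
    total_facets = json.loads(sample_response)['stats']['stats_fields']['id']['countDistinct']
    print(total_facets)  # 12345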

Here are some tests of Falcon 1.4.1 on Python 3.5 from my laptop:

    1. falcon...............60172 req/sec or 16.62 μs/req (36x)
    2. falcon-ext...........34186 req/sec or 29.25 μs/req (20x)
    3. bottle...............32924 req/sec or 30.37 μs/req (20x)
    4. werkzeug.............11948 req/sec or 83.70 μs/req (7x)
    5. flask.................6654 req/sec or 150.30 μs/req (4x)
    6. django................4565 req/sec or 219.04 μs/req (3x)
    7. pecan.................1672 req/sec or 598.19 μs/req (1x)

The tests were conducted with Falcon's official Docker benchmarking
tools on my Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz on Arch Linux.

See: https://github.com/falconry/falcon/tree/master/docker
Alan Orth 2018-10-24 14:08:23 +03:00
parent 62142eb79e
commit 6fd2827a7c
Signed by: alanorth
GPG Key ID: 0FB860CC9C45B1B9
2 changed files with 3 additions and 4 deletions


@@ -31,7 +31,7 @@
 # See: https://wiki.duraspace.org/display/DSPACE/Solr
 from database import database_connection
-import ujson
+import json
 import psycopg2.extras
 from solr import solr_connection
@@ -56,7 +56,7 @@ def index_views():
     }, rows=0)
 
     # get total number of distinct facets (countDistinct)
-    results_totalNumFacets = ujson.loads(res.get_json())['stats']['stats_fields']['id']['countDistinct']
+    results_totalNumFacets = json.loads(res.get_json())['stats']['stats_fields']['id']['countDistinct']
 
     # divide results into "pages" (cast to int to effectively round down)
     results_per_page = 100
@@ -115,7 +115,7 @@ def index_downloads():
     }, rows=0)
 
     # get total number of distinct facets (countDistinct)
-    results_totalNumFacets = ujson.loads(res.get_json())['stats']['stats_fields']['owningItem']['countDistinct']
+    results_totalNumFacets = json.loads(res.get_json())['stats']['stats_fields']['owningItem']['countDistinct']
 
     # divide results into "pages" (cast to int to effectively round down)
     results_per_page = 100
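
The paging comment in the context lines above divides the distinct-facet
total into pages of 100; a minimal sketch of that arithmetic with made-up
numbers (variable names beyond those shown in the diff are illustrative):

    # Made-up total; in the indexer this value comes from Solr's countDistinct.
    results_totalNumFacets = 12345
    results_per_page = 100

    # Casting to int truncates the quotient, i.e. effectively rounds down.
    num_pages = int(results_totalNumFacets / results_per_page)
    print(num_pages)  # 123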


@@ -9,5 +9,4 @@ python-mimeparse==1.6.0
 requests==2.19.1
 six==1.11.0
 SolrClient==0.2.1
-ujson==1.35
 urllib3==1.23