Falcon can optionally use ujson to speed up JSON (de)serialization,
but Falcon is already very fast, and requiring ujson makes deployment
trickier in some cases (for example in Docker containers based on
Alpine Linux, where the C extension must be compiled from source).
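If you still want the speedup when ujson happens to be installed, one
common pattern is a guarded import with a standard-library fallback
(a minimal sketch of the pattern, not Falcon's internals):

    # Prefer ujson when available, but fall back to the standard
    # library so the application still runs without it (e.g. on
    # Alpine-based images).
    try:
        import ujson as json
    except ImportError:
        import json

    body = json.dumps({"status": "ok"})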
Here are some tests of Falcon 1.4.1 on Python 3.5 from my laptop:
1. falcon...............60172 req/sec or 16.62 μs/req (36x)
2. falcon-ext...........34186 req/sec or 29.25 μs/req (20x)
3. bottle...............32924 req/sec or 30.37 μs/req (20x)
4. werkzeug.............11948 req/sec or 83.70 μs/req (7x)
5. flask.................6654 req/sec or 150.30 μs/req (4x)
6. django................4565 req/sec or 219.04 μs/req (3x)
7. pecan.................1672 req/sec or 598.19 μs/req (1x)
The tests were conducted with Falcon's official Docker benchmarking
tools on my Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz on Arch Linux.
See: https://github.com/falconry/falcon/tree/master/docker
The insert was working correctly on the first run, but subsequent
runs were updating the incorrect column on conflict. This made it
seem like there were downloads for items where there were none.
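A minimal sketch of the corrected shape of such an upsert with
psycopg2 (the items table, its columns, and the values here are all
hypothetical):

    import psycopg2

    connection = psycopg2.connect("dbname=statistics")
    cursor = connection.cursor()

    # The ON CONFLICT action must update the same column the INSERT
    # targets; excluded.downloads is the value from the attempted
    # insert.
    cursor.execute(
        """INSERT INTO items (id, downloads) VALUES (%s, %s)
           ON CONFLICT (id) DO UPDATE SET downloads = excluded.downloads""",
        ("some-item-id", 42),
    )
    connection.commit()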
We don't need to create an intermediate variable for the results of
the SQL query because psycopg2's cursor is iterable.
See: http://initd.org/psycopg/docs/cursor.html
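For example (connection details and field names are illustrative):

    import psycopg2

    connection = psycopg2.connect("dbname=statistics")
    cursor = connection.cursor()
    cursor.execute("SELECT id, views, downloads FROM items")

    # The cursor yields rows one at a time, so there is no need for
    # an intermediate rows = cursor.fetchall().
    for item_id, views, downloads in cursor:
        print(item_id, views, downloads)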
Batch inserts are much faster than a series of individual inserts
because they drastically reduce the overhead caused by round-trip
communication with the server. My tests in development confirm:
- cursor.execute(): 19 seconds
- execute_values(): 14 seconds
I'm currently only working with 4,500 rows, but I will experiment
with larger data sets, as well as larger batches. For example, on
the PostgreSQL mailing list a user reports doing 10,000 rows with
a page size of 100.
See: http://initd.org/psycopg/docs/extras.html#psycopg2.extras.execute_values
See: https://github.com/psycopg/psycopg2/issues/491#issuecomment-276551038
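A sketch of the execute_values() variant (table and column names
assumed as above; page_size defaults to 100):

    import psycopg2
    from psycopg2.extras import execute_values

    connection = psycopg2.connect("dbname=statistics")
    cursor = connection.cursor()

    rows = [("item-1", 10, 2), ("item-2", 5, 0)]  # (id, views, downloads)

    # Rows are sent to the server in pages, cutting round trips from
    # one per row to one per page.
    execute_values(
        cursor,
        "INSERT INTO items (id, views, downloads) VALUES %s",
        rows,
        page_size=100,
    )
    connection.commit()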
Solr's numFound reflects the number of matching documents and has
nothing to do with the actual number of distinct facet values that
are returned. You need to use Solr's stats component to get the
number of distinct facet values, aka countDistinct.
This is apparently deprecated in newer Solr versions, but we're on
version 4.10 and it works there.
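On 4.10 the parameter is stats.calcdistinct, which appears to be the
one that newer versions deprecate. A sketch using requests, with an
assumed base URL, core name, and field:

    import requests

    response = requests.get(
        "http://localhost:8080/solr/statistics/select",
        params={
            "q": "*:*",
            "rows": 0,  # we only want the stats, not the documents
            "stats": "true",
            "stats.field": "id",
            "stats.calcdistinct": "true",
            "wt": "json",
        },
    )
    distinct = response.json()["stats"]["stats_fields"]["id"]["countDistinct"]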
Also, I realized that there is no need to return facets for items
without any views or downloads. Using facet.mincount=1 reduces the
result set size and also means we can store less data in the
database. The API returns HTTP 404 Not Found anyway if an item is
not in the database.
I can't figure out exactly why, but there is some weird issue with
Solr's facet results when you don't use facet.mincount=1: for some
reason you get tons of results with ids that don't even exist in the
document database, let alone as actual DSpace items!
See: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
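As for the facet side of the query, facet.mincount=1 is just another
parameter; a sketch with the same assumed URL, core, and field:

    import requests

    response = requests.get(
        "http://localhost:8080/solr/statistics/select",
        params={
            "q": "*:*",
            "rows": 0,
            "facet": "true",
            "facet.field": "id",
            "facet.mincount": 1,  # skip facet values with zero hits
            "wt": "json",
        },
    )
    # Solr returns facet_fields as a flat [value, count, value,
    # count, ...] list.
    facets = response.json()["facet_counts"]["facet_fields"]["id"]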
You can test OnCalendar strings using systemd-analyze calendar. Note
that two daily times are written as a comma-separated list inside the
hour field (06,18:00:00), not as two full time specifications, e.g.:
# systemd-analyze calendar '*-*-* 06:00:00,18:00:00'
Failed to parse calendar specification '*-*-* 06:00:00,18:00:00': Invalid argument
# systemd-analyze calendar '*-*-* 06,18:00:00'
Normalized form: *-*-* 06,18:00:00
    Next elapse: Wed 2018-09-26 06:00:00 EEST
       (in UTC): Wed 2018-09-26 03:00:00 UTC
       From now: 6h left