mirror of https://github.com/ilri/dspace-statistics-api.git synced 2025-05-10 07:06:01 +02:00

Compare commits


75 Commits

Author SHA1 Message Date
15c3299b99 CHANGELOG.md: Add changes for v0.6.0 2018-10-31 19:26:45 +02:00
d36be5ee50 contrib: Update systemd unit files for refactor 2018-10-28 11:14:21 +02:00
2f45d27554 dspace_statistics_api/app.py: remove unused code
This was added accidentally when I refactored. I was trying to see
if I could use Falcon's on_exit() hook.
2018-10-28 11:14:21 +02:00
b8356f7a87 Add "application" alias to API object
By default gunicorn looks for an "application" object to run, so this
saves us having to type api:app.
2018-10-28 11:14:21 +02:00
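The alias is a single assignment in app.py (shown in the diff below):

    import falcon

    # "api" is the natural name, but gunicorn looks for "application" by default
    api = application = falcon.API()

    # both of these invocations now work:
    #   $ gunicorn dspace_statistics_api.app
    #   $ gunicorn dspace_statistics_api.app:api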
2136dc79ce Remove shebang from indexer.py
This is run as a Python module now so does not need a shebang.
2018-10-28 11:14:21 +02:00
ed60120cef Remove executable bit from indexer.py
Now it is run as a Python module.
2018-10-28 11:14:21 +02:00
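After these two commits the indexer is started through the interpreter
rather than executed directly:

    $ python -m dspace_statistics_api.indexer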
c027f01b48 Refactor project structure
This follows guidance from several well-known Python best practices
guides. Basically, the idea is to create a package for the application
that is comprised of several re-usable modules.

See: https://docs.python-guide.org/writing/structure/
See: https://realpython.com/python-application-layouts/
2018-10-28 11:14:21 +02:00
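The resulting layout looks roughly like this (file names taken from the
diff below; the empty __init__.py is an assumption, since it is what makes
the directory a package):

    dspace-statistics-api/
    ├── contrib/                  # example systemd units
    ├── dspace_statistics_api/
    │   ├── __init__.py
    │   ├── app.py                # Falcon WSGI application
    │   ├── config.py             # settings from environment variables
    │   ├── database.py           # PostgreSQL connection helper
    │   ├── indexer.py            # Solr → PostgreSQL indexer
    │   └── solr.py               # Solr connection helper
    ├── requirements.txt
    └── README.md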
754663f062 CHANGELOG.md: Add changes for version 0.5.2 2018-10-28 11:12:27 +02:00
507699e58a requirements.txt: Update libraries
Switch to a personal fork of SolrClient so that we can use kazoo 2.5.0
and get rid of the error about the 'async' keyword on Python 3.7. Also
this bumps some of the other libraries to their latest versions.
2018-10-28 11:09:47 +02:00
a016916995 CHANGELOG.md: Add note about ujson 2018-10-24 14:15:03 +03:00
6fd2827a7c Use Python's native json instead of ujson
Falcon can optionally use ujson to speed up JSON (de)serialization,
but Falcon's already really fast and requiring ujson actually makes
deployment trickier in some cases (for example in Docker containers
that are based on Alpine Linux).

Here are some tests of Falcon 1.4.1 on Python 3.5 from my laptop:

    1. falcon...............60172 req/sec or 16.62 μs/req (36x)
    2. falcon-ext...........34186 req/sec or 29.25 μs/req (20x)
    3. bottle...............32924 req/sec or 30.37 μs/req (20x)
    4. werkzeug.............11948 req/sec or 83.70 μs/req (7x)
    5. flask.................6654 req/sec or 150.30 μs/req (4x)
    6. django................4565 req/sec or 219.04 μs/req (3x)
    7. pecan.................1672 req/sec or 598.19 μs/req (1x)

The tests were conducted with Falcon's official Docker benchmarking
tools on my Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz on Arch Linux.

See: https://github.com/falconry/falcon/tree/master/docker
2018-10-24 14:08:23 +03:00
62142eb79e CHANGELOG.md: Move unreleased changes to v0.5.0 2018-10-24 12:02:42 +03:00
fda0321942 CHANGELOG.md: Add note about Solr in API component 2018-10-24 12:01:47 +03:00
963aa245c8 app.py: Don't initialize Solr connection
We only need Solr in the indexing component, not for the API itself.
2018-10-24 11:59:50 +03:00
568ff2eebb CHANGELOG.md: Add note about nginx configuration 2018-10-23 14:56:44 +03:00
deecb8a10b README.md: Add example nginx configuration 2018-10-23 14:55:36 +03:00
12f45d7c08 contrib: Adjust example path 2018-10-23 14:34:29 +03:00
f65089f9ce CHANGELOG.md: Update and move to 0.4.3 release 2018-10-17 09:51:44 +03:00
1db5cf1c29 README.md: Grammar 2018-10-17 09:51:35 +03:00
e581c4b1aa README.md: Improve documentation 2018-10-17 09:50:30 +03:00
e8d356c9ca README.md: Add TODO about Python 3.6+ f-string syntax
They are faster.
2018-10-17 09:13:25 +03:00
34a9b8d629 CHANGELOG.md: Add unreleased changes for Travis CI 2018-10-14 19:02:09 +03:00
41e3d66a0e .travis.yml: Only build master branch 2018-10-14 19:00:31 +03:00
9b2a6137b4 README.md: Add Travis CI badge
For now this is only an indicator that the Python requirements can
be satisfied and installed.
2018-10-14 18:58:12 +03:00
600b986f99 .travis.yml: Use Python 3.7-dev instead of 3.7
I don't think Travis supports Python 3.7 yet because the builds for
that version keep failing.
2018-10-14 18:57:30 +03:00
49a7790794 .travis.yml: Move script to one line 2018-10-14 18:53:45 +03:00
f2deba627c .travis.yml: Run pip install as script
Basically there are no tests for now, so I just want to check
that requirements.txt is correct and that all dependencies can be
installed.
2018-10-14 18:47:14 +03:00
9323513794 README.md: Update instructions 2018-10-14 18:45:40 +03:00
daf15610f2 CHANGELOG.md: Update changes and move to 0.4.2 2018-10-05 00:19:18 +03:00
4ede966dbb indexer.py: Fix logic error in SQL insert
This was inserting correctly on the first run, but subsequent runs
were inserting into the incorrect column on conflict. This made it
seem like there were downloads for items where there were none.
2018-10-05 00:16:24 +03:00
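The bug is visible in the deleted indexer.py at the bottom of this diff:
the views upsert updated the downloads column on conflict.

    # buggy: on conflict, views were written into the downloads column
    'INSERT INTO items(id, views) VALUES(%s, %s) ON CONFLICT(id) DO UPDATE SET downloads=excluded.views'

    # fixed: the conflict branch updates the same column that was inserted
    'INSERT INTO items(id, views) VALUES(%s, %s) ON CONFLICT(id) DO UPDATE SET views=excluded.views'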
3580473a6d README.md: Add TODO about JSON in PostgreSQL 2018-10-03 20:08:18 +03:00
071c24535f README.md: Add TODO about API versions 2018-10-03 11:12:18 +03:00
4291aecac4 README.md: Formatting 2018-09-27 12:45:15 +03:00
46bf537e88 CHANGELOG.md: Add note about cursor change 2018-09-27 11:08:42 +03:00
eaca5354d3 app.py: Iterate directly on cursor
We don't need to create an intermediate variable for the results of
the SQL query because psycopg2's cursor is iterable.

See: http://initd.org/psycopg/docs/cursor.html
2018-09-27 11:03:44 +03:00
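A minimal sketch of the pattern (the DictCursor configured in database.py
allows dict-style access to each row):

    cursor = db.cursor()
    cursor.execute('SELECT id, views, downloads FROM items ORDER BY id ASC')

    # the cursor itself is iterable; no fetchmany()/fetchall() intermediate needed
    for item in cursor:
        print(item['id'], item['views'], item['downloads'])

    cursor.close()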
4600288ee4 CHANGELOG.md: Add note about ujson 2018-09-27 09:53:42 +03:00
8179563378 requirements.txt: pip freeze 2018-09-27 09:53:16 +03:00
b14c3eef4d indexer.py: Use ujson instead of json
Falcon optionally makes use of the ujson library to speed up media
(de)serialization, error serialization, and query string parsing.

See: https://falcon.readthedocs.io/en/stable/user/install.html
2018-09-27 09:51:40 +03:00
71a789b13f CHANGELOG.md: Add unreleased changes 2018-09-27 09:30:48 +03:00
c68ddacaa4 README.md: Add note about systemd units for deployment 2018-09-27 09:26:47 +03:00
9c9e79769e README.md: Add TODO 2018-09-27 09:17:45 +03:00
2ad5ade556 README.md: Improve introduction 2018-09-27 09:12:52 +03:00
7412a09670 README.md: Improve introduction 2018-09-27 09:07:28 +03:00
bb744a00b8 README.md: Add requirements 2018-09-27 08:57:27 +03:00
7499b89d99 CHANGELOG.md: Move unreleased changes to v0.4.1 2018-09-27 08:15:54 +03:00
2c1e4952b1 indexer.py: Remove comment
I had left this there so I could remember how to get the number of
facets, but I don't need it anymore.
2018-09-26 23:27:48 +03:00
379f202c3f CHANGELOG.md: Add unreleased changes 2018-09-26 23:26:48 +03:00
560fa6056d README.md: Remove batch inserts from TODO 2018-09-26 23:25:35 +03:00
385a34e5d0 indexer.py: Use psycopg2's execute_values to batch inserts
Batch inserts are much faster than a series of individual inserts
because they drastically reduce the overhead caused by round-trip
communication with the server. My tests in development confirm:

  - cursor.execute(): 19 seconds
  - execute_values(): 14 seconds

I'm currently only working with 4,500 rows, but I will experiment
with larger data sets, as well as larger batches. For example, on
the PostgreSQL mailing list a user reports doing 10,000 rows with
a page size of 100.

See: http://initd.org/psycopg/docs/extras.html#psycopg2.extras.execute_values
See: https://github.com/psycopg/psycopg2/issues/491#issuecomment-276551038
2018-09-26 23:10:29 +03:00
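A minimal sketch of the batched upsert as used in indexer.py below (the
tuples here are made up; execute_values defaults to a page size of 100):

    import psycopg2.extras

    data = [(15, 120), (16, 8), (17, 43)]  # hypothetical (id, views) tuples
    sql = 'INSERT INTO items(id, views) VALUES %s ON CONFLICT(id) DO UPDATE SET views=excluded.views'

    # one round trip for the whole batch instead of one INSERT per row
    psycopg2.extras.execute_values(cursor, sql, data, template='(%s, %s)')
    db.commit()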
d0ea62d2bd database.py: Use one line for psycopg2 imports 2018-09-26 22:23:24 +03:00
366ae25b8e README.md: Add link to psycopg2 issue about batch inserts 2018-09-26 22:23:08 +03:00
0f3054ae03 README.md: Add TODO about batch DB inserts 2018-09-26 16:31:13 +03:00
6bf34235d4 CHANGELOG.md: Move unreleased changes to version 0.4.0 2018-09-26 02:51:27 +03:00
e604d8ca81 indexer.py: Major refactor
Basically Solr's numFound has nothing to do with the actual number
of distinct facets that are returned. You need to use Solr's stats
component to get the number of distinct facets, aka countDistinct.
This is apparently deprecated in newer Solr versions, but we're on
version 4.10 and it works there.

Also, I realized that there is no need to return facets for items
without any views or downloads. Using facet.mincount=1 reduces the
result set size and also means we can store less data in the
database. The API returns HTTP 404 Not Found if an item is not in
the database anyways.

I can't figure it out exactly, but there is some weird issue with
Solr's facet results when you don't use facet.mincount=1. For some
reason you get tons of results with an id that doesn't even exist
in the document database, let alone as an actual DSpace item!

See: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
2018-09-26 02:41:10 +03:00
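The query pattern from the new indexer.py: ask Solr for zero rows, facet
with mincount 1, and read countDistinct from the stats section (assumes
solr is a SolrClient connection):

    import json

    res = solr.query('statistics', {
        'q': 'type:2',
        'fq': 'isBot:false AND statistics_type:view',
        'facet': True, 'facet.field': 'id', 'facet.mincount': 1,
        'stats': True, 'stats.field': 'id', 'stats.calcdistinct': True
    }, rows=0)

    # countDistinct is the real number of distinct facets; numFound is not
    num_facets = json.loads(res.get_json())['stats']['stats_fields']['id']['countDistinct']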
fc35b816f3 CHANGELOG.md: Add unreleased changes 2018-09-25 23:09:44 +03:00
9e6a2f7559 contrib/dspace-statistics-indexer.timer: Fix syntax
You can test OnCalendar strings using systemd-analyze calendar, e.g.:

    # systemd-analyze calendar '*-*-* 06:00:00,18:00:00'
    Failed to parse calendar specification '*-*-* 06:00:00,18:00:00':
    Invalid argument
    # systemd-analyze calendar '*-*-* 06,18:00:00'
    Normalized form: *-*-* 06,18:00:00
        Next elapse: Wed 2018-09-26 06:00:00 EEST
           (in UTC): Wed 2018-09-26 03:00:00 UTC
           From now: 6h left
2018-09-25 23:07:03 +03:00
46cfc3ffbc CHANGELOG.md: Release version 0.3.2 2018-09-25 13:14:08 +03:00
2850035a4c Return HTTP 404 when an item id is not found 2018-09-25 13:12:53 +03:00
c0b550109a README.md: Improve wording 2018-09-25 12:24:52 +03:00
bfceffd84d indexer.py: Improve inline documentation 2018-09-25 12:23:31 +03:00
d0552f5047 CHANGELOG.md: Move unreleased changes to version 0.3.1 2018-09-25 12:18:26 +03:00
c3a0bf7f44 CHANGELOG.md: Add Python 3.7 to Travis CI config 2018-09-25 12:17:49 +03:00
6e47e9c9ee .travis.yml: Add Python 3.7 2018-09-25 12:17:20 +03:00
cd90d618d6 CHANGELOG.md: Fix error in old release 2018-09-25 12:17:01 +03:00
280d211d56 CHANGELOG.md: Add note about kazoo 2.5.0 2018-09-25 12:12:10 +03:00
806d63137f requirements.txt: Use kazoo 2.5.0
SolrClient 0.2.1 currently depends on kazoo 2.2.1, but there is an
issue with Python 3.7 in kazoo < 2.5.0. Kazoo 2.5.0 fixes the issue
with Python 3.7, and for my limited usage of SolrClient it seems to
work fine.

See: https://github.com/moonlitesolutions/SolrClient/issues/79
2018-09-25 12:08:28 +03:00
f7c7390e4f README.md: Add note about Python 3.7 2018-09-25 12:07:58 +03:00
702724e8a4 CHANGELOG.md: Move unreleased changes to version 0.3.0 2018-09-25 11:38:36 +03:00
36818d03ef CHANGELOG.md: Update unreleased changes 2018-09-25 11:37:56 +03:00
4cf8656b35 Change / route to /items
I think it's more obvious if the "all items" route is plural. Also,
this will allow me to eventually put documentation at the root.
2018-09-25 11:34:07 +03:00
f30a464cd1 README.md: Add notes about API endpoints 2018-09-25 11:28:12 +03:00
93ae12e313 README.md: Update introduction 2018-09-25 11:15:12 +03:00
dc978e9333 CHANGELOG.md: Add note about requirements.txt and Travis CI 2018-09-25 11:09:02 +03:00
295436fea0 Add .travis.yml 2018-09-25 11:08:01 +03:00
46a1476ab0 Add requirements.txt
Generated with `pip freeze`. This is so I can pin the versions of
packages that I've tested with, as well as to allow Travis to test
whether the project runs on various Pythons, and to let GitHub
inform me of vulnerabilities in some libraries.
2018-09-25 11:02:50 +03:00
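For reference, the file was produced with:

    $ pip freeze > requirements.txt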
14 changed files with 358 additions and 184 deletions

.travis.yml (new file)

@@ -0,0 +1,11 @@
language: python
python:
  - "3.5"
  - "3.6"
  - "3.7-dev"
script: pip install -r requirements.txt
branches:
  only:
    - master
# vim: ts=2 sw=2 et

CHANGELOG.md

@@ -4,6 +4,68 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.0] - 2018-10-31
### Changed
- Refactor project structure (note breaking changes to API and indexing invocation, see contrib and README.md)

## [0.5.2] - 2018-10-28
### Changed
- Update library versions in requirements.txt

## [0.5.1] - 2018-10-24
### Changed
- Use Python's native json instead of ujson

## [0.5.0] - 2018-10-24
### Added
- Example nginx configuration to README.md
### Changed
- Don't initialize Solr connection in API

## [0.4.3] - 2018-10-17
### Changed
- Use pip install as script for Travis CI
### Improved
- Documentation for deployment and testing

## [0.4.2] - 2018-10-04
### Changed
- README.md introduction and requirements
- Use ujson instead of json
- Iterate directly on SQL cursor in `/items` route
### Fixed
- Logic error in SQL for item views

## [0.4.1] - 2018-09-26
### Changed
- Use execute_values() to batch insert records to PostgreSQL

## [0.4.0] - 2018-09-25
### Fixed
- Invalid OnCalendar syntax in dspace-statistics-indexer.timer
- Major logic error in indexer.py

## [0.3.2] - 2018-09-25
### Changed
- /item/id route now returns HTTP 404 if an item is not found

## [0.3.1] - 2018-09-25
### Changed
- Force SolrClient's kazoo dependency to version 2.5.0 to work with Python 3.7
- Add Python 3.7 to Travis CI configuration

## [0.3.0] - 2018-09-25
### Added
- requirements.txt for pip
- Travis CI build configuration for Python 3.5 and 3.6
- Documentation on using the API
### Changed
- The "all items" route from / to /items

## [0.2.1] - 2018-09-24
### Changed
- Environment settings in example systemd unit files

README.md

@@ -1,22 +1,79 @@
# DSpace Statistics API [![Build Status](https://travis-ci.org/alanorth/dspace-statistics-api.svg?branch=master)](https://travis-ci.org/alanorth/dspace-statistics-api)
A simple REST API to expose Solr view and download statistics for items in a DSpace repository. This project contains a standalone indexing component and a WSGI application.

## Requirements

- Python 3.5+
- PostgreSQL version 9.5+ (due to [`UPSERT` support](https://wiki.postgresql.org/wiki/UPSERT))
- DSpace 4+ with [Solr usage statistics enabled](https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics)

## Installation and Testing
Create a Python virtual environment and install the dependencies:

    $ python -m venv venv
    $ . venv/bin/activate
    $ pip install -r requirements.txt

Set up the environment variables for Solr and PostgreSQL:

    $ export SOLR_SERVER=http://localhost:8080/solr
    $ export DATABASE_NAME=dspacestatistics
    $ export DATABASE_USER=dspacestatistics
    $ export DATABASE_PASS=dspacestatistics
    $ export DATABASE_HOST=localhost

Index the Solr statistics core to populate the PostgreSQL database:

    $ python -m dspace_statistics_api.indexer

Run the REST API:

    $ gunicorn dspace_statistics_api.app

Test to see if there are any statistics:

    $ curl 'http://localhost:8000/items?limit=1'

## Deployment
There are example systemd service and timer units in the `contrib` directory. The API service listens on localhost by default so you will need to expose it publicly using a web server like nginx.

An example nginx configuration is:

```
server {
    #...

    location ~ /rest/statistics/?(.*) {
        access_log /var/log/nginx/statistics.log;
        proxy_pass http://statistics_api/$1$is_args$args;
    }
}

upstream statistics_api {
    server 127.0.0.1:5000;
}
```

This would expose the API at `/rest/statistics`.

## Using the API
The API exposes the following endpoints:

- GET `/items`: return views and downloads for all items that Solr knows about¹. Accepts `limit` and `page` query parameters for pagination of results.
- GET `/item/id`: return views and downloads for a single item (*id* must be a positive integer). Returns HTTP 404 if an item id is not found.

¹ We are querying the Solr statistics core, which technically only knows about items that have either views or downloads.
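A response from `/items` looks like this (field names as in app.py; the
numbers here are made up):

```
{
  "currentPage": 0,
  "totalPages": 4,
  "statistics": [
    { "id": 75, "views": 4, "downloads": 0 }
  ]
}
```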
## Todo

- Add API documentation
- Close DB connection when gunicorn shuts down gracefully
- Better logging
- Tests
- Check if database exists (try/except)
- Version API
- Use JSON in PostgreSQL
- Switch to [Python 3.6+ f-string syntax](https://realpython.com/python-f-strings/)

## License
This work is licensed under the [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html).

contrib/dspace-statistics-api.service

@@ -9,10 +9,10 @@ Environment=DATABASE_PASS=dspacestatistics
Environment=DATABASE_HOST=localhost
User=nobody
Group=nogroup
WorkingDirectory=/var/lib/dspace-statistics-api
ExecStart=/var/lib/dspace-statistics-api/venv/bin/gunicorn \
    --bind 127.0.0.1:5000 \
    dspace_statistics_api.app
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID

contrib/dspace-statistics-indexer.service

@@ -10,8 +10,8 @@ Environment=DATABASE_PASS=dspacestatistics
Environment=DATABASE_HOST=localhost
User=nobody
Group=nogroup
WorkingDirectory=/var/lib/dspace-statistics-api
ExecStart=/var/lib/dspace-statistics-api/venv/bin/python -m dspace_statistics_api.indexer
[Install]
WantedBy=multi-user.target

contrib/dspace-statistics-indexer.timer

@@ -3,7 +3,7 @@ Description=DSpace Statistics Indexer
[Timer]
# twice a day, at 6AM and 6PM
OnCalendar=*-*-* 06,18:00:00
# Add a random delay of 0-3600 seconds
RandomizedDelaySec=3600
Persistent=true

dspace_statistics_api/__init__.py (new, empty file)

dspace_statistics_api/app.py

@@ -1,10 +1,8 @@
from .database import database_connection
import falcon

db = database_connection()
db.set_session(readonly=True)

class AllItemsResource:
    def on_get(self, req, resp):
@@ -22,16 +20,16 @@ class AllItemsResource:
        # get statistics, ordered by id, and use limit and offset to page through results
        cursor.execute('SELECT id, views, downloads FROM items ORDER BY id ASC LIMIT {} OFFSET {}'.format(limit, offset))

        # create a list to hold dicts of item stats
        statistics = list()

        # iterate over results and build statistics object
        for item in cursor:
            statistics.append({ 'id': item['id'], 'views': item['views'], 'downloads': item['downloads'] })

        cursor.close()

        message = {
            'currentPage': page,
            'totalPages': pages,
@@ -47,19 +45,26 @@ class ItemResource:
        cursor = db.cursor()
        cursor.execute('SELECT views, downloads FROM items WHERE id={}'.format(item_id))

        if cursor.rowcount == 0:
            raise falcon.HTTPNotFound(
                title='Item not found',
                description='The item with id "{}" was not found.'.format(item_id)
            )
        else:
            results = cursor.fetchone()

            statistics = {
                'id': item_id,
                'views': results['views'],
                'downloads': results['downloads']
            }

            resp.media = statistics

        cursor.close()

api = application = falcon.API()
api.add_route('/items', AllItemsResource())
api.add_route('/item/{item_id:int}', ItemResource())

# vim: set sw=4 ts=4 expandtab:

dspace_statistics_api/database.py

@@ -1,9 +1,8 @@
from .config import DATABASE_NAME
from .config import DATABASE_USER
from .config import DATABASE_PASS
from .config import DATABASE_HOST
import psycopg2, psycopg2.extras

def database_connection():
    connection = psycopg2.connect("dbname={} user={} password={} host='{}'".format(DATABASE_NAME, DATABASE_USER, DATABASE_PASS, DATABASE_HOST), cursor_factory=psycopg2.extras.DictCursor)

dspace_statistics_api/indexer.py (new file)

@@ -0,0 +1,172 @@
#
# indexer.py
#
# Copyright 2018 Alan Orth.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# ---
#
# Connects to a DSpace Solr statistics core and ingests item views and downloads
# into a PostgreSQL database for use by other applications (like an API).
#
# This script is written for Python 3.5+ and requires several modules that you
# can install with pip (I recommend using a Python virtual environment):
#
#   $ pip install SolrClient psycopg2-binary
#
# See: https://solrclient.readthedocs.io/en/latest/SolrClient.html
# See: https://wiki.duraspace.org/display/DSPACE/Solr

from .database import database_connection
import json
import psycopg2.extras
from .solr import solr_connection

def index_views():
    # get total number of distinct facets for items with a minimum of 1 view,
    # otherwise Solr returns all kinds of weird ids that are actually not in
    # the database. Also, stats are expensive, but we need stats.calcdistinct
    # so we can get the countDistinct summary.
    #
    # see: https://lucene.apache.org/solr/guide/6_6/the-stats-component.html
    res = solr.query('statistics', {
        'q':'type:2',
        'fq':'isBot:false AND statistics_type:view',
        'facet':True,
        'facet.field':'id',
        'facet.mincount':1,
        'facet.limit':1,
        'facet.offset':0,
        'stats':True,
        'stats.field':'id',
        'stats.calcdistinct':True
    }, rows=0)

    # get total number of distinct facets (countDistinct)
    results_totalNumFacets = json.loads(res.get_json())['stats']['stats_fields']['id']['countDistinct']

    # divide results into "pages" (cast to int to effectively round down)
    results_per_page = 100
    results_num_pages = int(results_totalNumFacets / results_per_page)
    results_current_page = 0

    cursor = db.cursor()

    # create an empty list to store values for batch insertion
    data = []

    while results_current_page <= results_num_pages:
        print('Indexing item views (page {} of {})'.format(results_current_page, results_num_pages))

        res = solr.query('statistics', {
            'q':'type:2',
            'fq':'isBot:false AND statistics_type:view',
            'facet':True,
            'facet.field':'id',
            'facet.mincount':1,
            'facet.limit':results_per_page,
            'facet.offset':results_current_page * results_per_page
        }, rows=0)

        # SolrClient's get_facets() returns a dict of dicts
        views = res.get_facets()
        # in this case iterate over the 'id' dict and get the item ids and views
        for item_id, item_views in views['id'].items():
            data.append((item_id, item_views))

        # do a batch insert of values from the current "page" of results
        sql = 'INSERT INTO items(id, views) VALUES %s ON CONFLICT(id) DO UPDATE SET views=excluded.views'
        psycopg2.extras.execute_values(cursor, sql, data, template='(%s, %s)')
        db.commit()

        # clear all items from the list so we can populate it with the next batch
        data.clear()

        results_current_page += 1

    cursor.close()

def index_downloads():
    # get the total number of distinct facets for items with at least 1 download
    res = solr.query('statistics', {
        'q':'type:0',
        'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
        'facet':True,
        'facet.field':'owningItem',
        'facet.mincount':1,
        'facet.limit':1,
        'facet.offset':0,
        'stats':True,
        'stats.field':'owningItem',
        'stats.calcdistinct':True
    }, rows=0)

    # get total number of distinct facets (countDistinct)
    results_totalNumFacets = json.loads(res.get_json())['stats']['stats_fields']['owningItem']['countDistinct']

    # divide results into "pages" (cast to int to effectively round down)
    results_per_page = 100
    results_num_pages = int(results_totalNumFacets / results_per_page)
    results_current_page = 0

    cursor = db.cursor()

    # create an empty list to store values for batch insertion
    data = []

    while results_current_page <= results_num_pages:
        print('Indexing item downloads (page {} of {})'.format(results_current_page, results_num_pages))

        res = solr.query('statistics', {
            'q':'type:0',
            'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
            'facet':True,
            'facet.field':'owningItem',
            'facet.mincount':1,
            'facet.limit':results_per_page,
            'facet.offset':results_current_page * results_per_page
        }, rows=0)

        # SolrClient's get_facets() returns a dict of dicts
        downloads = res.get_facets()
        # in this case iterate over the 'owningItem' dict and get the item ids and downloads
        for item_id, item_downloads in downloads['owningItem'].items():
            data.append((item_id, item_downloads))

        # do a batch insert of values from the current "page" of results
        sql = 'INSERT INTO items(id, downloads) VALUES %s ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads'
        psycopg2.extras.execute_values(cursor, sql, data, template='(%s, %s)')
        db.commit()

        # clear all items from the list so we can populate it with the next batch
        data.clear()

        results_current_page += 1

    cursor.close()

db = database_connection()
solr = solr_connection()

# create table to store item views and downloads
cursor = db.cursor()
cursor.execute('''CREATE TABLE IF NOT EXISTS items
                  (id INT PRIMARY KEY, views INT DEFAULT 0, downloads INT DEFAULT 0)''')

index_views()
index_downloads()

db.close()

# vim: set sw=4 ts=4 expandtab:

dspace_statistics_api/solr.py

@@ -1,4 +1,4 @@
from .config import SOLR_SERVER
from SolrClient import SolrClient

def solr_connection():

indexer.py (deleted)

@ -1,144 +0,0 @@
#!/usr/bin/env python
#
# indexer.py
#
# Copyright 2018 Alan Orth.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# ---
#
# Connects to a DSpace Solr statistics core and ingests item views and downloads
# into a Postgres database for use with other applications (an API, for example).
#
# This script is written for Python 3 and requires several modules that you can
# install with pip (I recommend setting up a Python virtual environment first):
#
# $ pip install SolrClient
#
# See: https://solrclient.readthedocs.io/en/latest/SolrClient.html
# See: https://wiki.duraspace.org/display/DSPACE/Solr
#
# Tested with Python 3.5 and 3.6.
from database import database_connection
from solr import solr_connection
def index_views():
print("Populating database with item views.")
# determine the total number of items with views (aka Solr's numFound)
res = solr.query('statistics', {
'q':'type:2',
'fq':'isBot:false AND statistics_type:view',
'facet':True,
'facet.field':'id',
}, rows=0)
# divide results into "pages" (numFound / 100)
results_numFound = res.get_num_found()
results_per_page = 100
results_num_pages = round(results_numFound / results_per_page)
results_current_page = 0
cursor = db.cursor()
while results_current_page <= results_num_pages:
print('Page {} of {}.'.format(results_current_page, results_num_pages))
res = solr.query('statistics', {
'q':'type:2',
'fq':'isBot:false AND statistics_type:view',
'facet':True,
'facet.field':'id',
'facet.limit':results_per_page,
'facet.offset':results_current_page * results_per_page
})
# make sure total number of results > 0
if res.get_num_found() > 0:
# SolrClient's get_facets() returns a dict of dicts
views = res.get_facets()
# in this case iterate over the 'id' dict and get the item ids and views
for item_id, item_views in views['id'].items():
cursor.execute('''INSERT INTO items(id, views) VALUES(%s, %s)
ON CONFLICT(id) DO UPDATE SET downloads=excluded.views''',
(item_id, item_views))
db.commit()
results_current_page += 1
cursor.close()
def index_downloads():
print("Populating database with item downloads.")
# determine the total number of items with downloads (aka Solr's numFound)
res = solr.query('statistics', {
'q':'type:0',
'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
'facet':True,
'facet.field':'owningItem',
}, rows=0)
# divide results into "pages" (numFound / 100)
results_numFound = res.get_num_found()
results_per_page = 100
results_num_pages = round(results_numFound / results_per_page)
results_current_page = 0
cursor = db.cursor()
while results_current_page <= results_num_pages:
print('Page {} of {}.'.format(results_current_page, results_num_pages))
res = solr.query('statistics', {
'q':'type:0',
'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
'facet':True,
'facet.field':'owningItem',
'facet.limit':results_per_page,
'facet.offset':results_current_page * results_per_page
})
# make sure total number of results > 0
if res.get_num_found() > 0:
# SolrClient's get_facets() returns a dict of dicts
downloads = res.get_facets()
# in this case iterate over the 'owningItem' dict and get the item ids and downloads
for item_id, item_downloads in downloads['owningItem'].items():
cursor.execute('''INSERT INTO items(id, downloads) VALUES(%s, %s)
ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads''',
(item_id, item_downloads))
db.commit()
results_current_page += 1
cursor.close()
db = database_connection()
solr = solr_connection()
# create table to store item views and downloads
cursor = db.cursor()
cursor.execute('''CREATE TABLE IF NOT EXISTS items
(id INT PRIMARY KEY, views INT DEFAULT 0, downloads INT DEFAULT 0)''')
index_views()
index_downloads()
db.close()
# vim: set sw=4 ts=4 expandtab:

requirements.txt (new file)

@@ -0,0 +1,12 @@
certifi==2018.10.15
chardet==3.0.4
falcon==1.4.1
gunicorn==19.9.0
idna==2.7
kazoo==2.5.0
psycopg2-binary==2.7.5
python-mimeparse==1.6.0
requests==2.20.0
six==1.11.0
-e git://github.com/alanorth/SolrClient.git@c629e3475be37c82770b2be61748be7e29882648#egg=SolrClient
urllib3==1.24