mirror of https://github.com/ilri/dspace-statistics-api.git synced 2025-05-10 15:16:02 +02:00

Compare commits


47 Commits

SHA1 Message Date
46cfc3ffbc CHANGELOG.md: Release version 0.3.2 2018-09-25 13:14:08 +03:00
2850035a4c Return HTTP 404 when an item id is not found 2018-09-25 13:12:53 +03:00
c0b550109a README.md: Improve wording 2018-09-25 12:24:52 +03:00
bfceffd84d indexer.py: Improve inline documentation 2018-09-25 12:23:31 +03:00
d0552f5047 CHANGELOG.md: Move unreleased changes to version 0.3.1 2018-09-25 12:18:26 +03:00
c3a0bf7f44 CHANGELOG.md: Add Python 3.7 to Travis CI config 2018-09-25 12:17:49 +03:00
6e47e9c9ee .travis.yml: Add Python 3.7 2018-09-25 12:17:20 +03:00
cd90d618d6 CHANGELOG.md: Fix error in old release 2018-09-25 12:17:01 +03:00
280d211d56 CHANGELOG.md: Add note about kazoo 2.5.0 2018-09-25 12:12:10 +03:00
806d63137f requirements.txt: Use kazoo 2.5.0
SolrClient 0.2.1 currently depends on kazoo 2.2.1, but there is an
issue with Python 3.7 in kazoo versions before 2.5.0. Kazoo 2.5.0
fixes the issue with Python 3.7, and for my limited usage of
SolrClient it seems to work fine.

See: https://github.com/moonlitesolutions/SolrClient/issues/79
2018-09-25 12:08:28 +03:00
f7c7390e4f README.md: Add note about Python 3.7 2018-09-25 12:07:58 +03:00
702724e8a4 CHANGELOG.md: Move unreleased changes to version 0.3.0 2018-09-25 11:38:36 +03:00
36818d03ef CHANGELOG.md: Update unreleased changes 2018-09-25 11:37:56 +03:00
4cf8656b35 Change / route to /items
I think it's more obvious if the "all items" route is plural. Also,
this will allow me to eventually put documentation at the root.
2018-09-25 11:34:07 +03:00
f30a464cd1 README.md: Add notes about API endpoints 2018-09-25 11:28:12 +03:00
93ae12e313 README.md: Update introduction 2018-09-25 11:15:12 +03:00
dc978e9333 CHANGELOG.md: Add note about requirements.txt and Travis CI 2018-09-25 11:09:02 +03:00
295436fea0 Add .travis.yml 2018-09-25 11:08:01 +03:00
46a1476ab0 Add requirements.txt
Generated with `pip freeze`. This is so I can pin the versions of
packages that I've tested with, as well as to allow Travis to test
whether the project runs on various Python versions, and to let
GitHub inform me of vulnerabilities in some libraries.
2018-09-25 11:02:50 +03:00
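
For reference, a sketch of the workflow behind such a pinned file (the exact commands are not recorded in this commit):

    $ pip install falcon gunicorn SolrClient psycopg2-binary
    $ pip freeze > requirements.txt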
87dbb6c4df CHANGELOG.md: Release version 0.2.1 2018-09-25 02:21:44 +03:00
3160c44566 app.py: Remove comment
This comment was added when I first began the application, and the
testing status is now documented in the README.
2018-09-25 02:20:51 +03:00
4b72f626d9 Update string substitution format
Instead of using numbered placeholders I will just rely on the
argument order, at least to be consistent.
2018-09-25 02:19:29 +03:00
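
For illustration (a minimal sketch, not from the commit itself): both forms substitute arguments in order, so the explicit indices were redundant:

    # numbered placeholders (before)
    'Page {0} of {1}.'.format(2, 10)  # 'Page 2 of 10.'

    # implicit positional placeholders (after)
    'Page {} of {}.'.format(2, 10)    # 'Page 2 of 10.'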
2d3b7620e3 CHANGELOG.md: Add note about psycopg2.extras.DictCursor 2018-09-25 02:08:54 +03:00
6e4bc630f7 database.py: Use psycopg2.extras.DictCursor
This allows us to access records using their column name. I didn't
notice that this was not working, as I had been testing the wrong
server!

See: http://initd.org/psycopg/docs/extras.html
2018-09-25 02:06:29 +03:00
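
A minimal sketch of what the DictCursor enables (connection parameters here are the project defaults):

    import psycopg2
    import psycopg2.extras

    connection = psycopg2.connect('dbname=dspacestatistics user=dspacestatistics host=localhost',
                                  cursor_factory=psycopg2.extras.DictCursor)
    cursor = connection.cursor()
    cursor.execute('SELECT id, views, downloads FROM items')
    row = cursor.fetchone()
    # rows now behave like dicts as well as tuples
    print(row['views'], row[1])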
44884140e5 CHANGELOG.md: Add new unreleased changes 2018-09-25 01:11:37 +03:00
74ff86ee3b contrib: Update environment settings in system units 2018-09-25 01:10:14 +03:00
3327884f21 Update docs to remove SQLite stuff
I've decided to use PostgreSQL instead of SQLite because the UPSERT
support is available in versions of PostgreSQL we're already running,
whereas SQLite needs a VERY new version (3.24.0) that is not available
on any recent long-term support Ubuntu releases.
2018-09-25 00:56:01 +03:00
8f7450f67a Use PostgreSQL instead of SQLite
I was very surprised how easy and fast and robust SQLite was, but in
the end I realized that its UPSERT support only came in version 3.24
and both Ubuntu 16.04 and 18.04 have older versions than that! I did
manage to install libsqlite3-0 from Ubuntu 18.10 cosmic on my xenial
host, but that feels dirty.

PostgreSQL has support for UPSERT since 9.5, not to mention the same
nice LIMIT and OFFSET clauses.
2018-09-25 00:49:47 +03:00
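
The version distinction is easy to check from Python itself; a quick sketch (not part of the commit):

    import sqlite3

    # version of the underlying libsqlite3, not of the sqlite3 module;
    # the UPSERT syntax needs 3.24.0 or newer
    print(sqlite3.sqlite_version)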
28d61fb041 README.md: Add notes about Python and SQLite versions 2018-09-24 17:26:48 +03:00
cbc98991b4 CHANGELOG.md: Move unreleased notes to version 0.1.0 2018-09-24 16:14:14 +03:00
6c28be0463 README.md: Add note about route for all items 2018-09-24 16:13:26 +03:00
42e8f17305 CHANGELOG.md: Add note about route for all items 2018-09-24 16:13:05 +03:00
19a45f3f6f app.py: Add route to page through all item statistics
This route exposes all item statistics and uses the limit and offset
parameters to control paging through the result set. The logic here
is extremely easy thanks to the brilliant LIMIT and OFFSET features
of SQLite (of course the SQL query sorts the results by some unique
field to ensure the order is always the same).
2018-09-24 16:07:26 +03:00
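
The paging arithmetic is just limit multiplied by page; a minimal sketch of the pattern (given a cursor on the project's items table):

    limit = 100
    page = 2
    offset = limit * page  # rows 200-299

    # ordering by a unique column keeps pages stable between requests
    cursor.execute('SELECT id, views, downloads FROM items '
                   'ORDER BY id ASC LIMIT {} OFFSET {}'.format(limit, offset))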
505ef31101 CHANGELOG.md: Add note about UPSERT 2018-09-24 14:31:05 +03:00
1543cacc54 app.py: Update SQL logic to use single table
The indexer.py script was updated to use a single table because I
learned about UPSERT. This simplifies the database schema and the
Python logic, and makes it easier to page through all views and downloads
at once without complicated JOIN queries.
2018-09-24 14:28:00 +03:00
2cab456f16 indexer.py: Use single items table with UPSERT
I was using two separate tables for item views and downloads without
realizing that SQLite didn't support FULL OUTER JOIN, which would be
needed to get views and downloads for a given item in a single query.

Instead I can use one table with a default value of 0 for both views
and downloads, and then use "UPSERT" to populate the statistics. This
is a newish SQL concept that allows you to attempt an INSERT and then
specify an action to perform in case of conflict. This works well in
SQLite and actually simplifies my Python logic greatly!

Note that the "excluded" table qualifier is a special keyword that
allows you to reference the value that would have been inserted.

See: https://www.sqlite.org/lang_UPSERT.html
2018-09-24 14:19:50 +03:00
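
A minimal sketch of the UPSERT pattern described here (SQLite 3.24.0+ syntax; the same clause works in PostgreSQL 9.5+):

    -- the first write inserts the row; a later write with the same id
    -- updates it, with "excluded" referring to the values that the
    -- conflicting INSERT attempted to insert
    INSERT INTO items(id, views) VALUES(42, 100)
    ON CONFLICT(id) DO UPDATE SET views=excluded.views;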
53615dea2d indexer.py: Add license and documentation 2018-09-24 09:18:50 +03:00
2d8d1e6833 README.md: Add TODO for nonexistent items 2018-09-24 00:48:02 +03:00
e26e595ea1 README.md: Add more TODOs 2018-09-24 00:35:00 +03:00
a9151b5bbf CHANGELOG.md: Update unreleased notes 2018-09-24 00:30:58 +03:00
76833d6f5f contrib: Update some old CGSpace references to DSpace 2018-09-24 00:30:26 +03:00
a51422273c Remove SOLR_CORE configuration variable
This parameter is not customizable. All DSpace instances use this
name for the Solr statistics core.
2018-09-24 00:20:54 +03:00
89621af85d Split database access into RW and RO
The indexer needs to be able to write to the database, but the API
only needs to read it.
2018-09-24 00:00:05 +03:00
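
With psycopg2 the read-only side is a single call on the connection; a sketch of what the API does after this change:

    from database import database_connection

    db = database_connection()
    # any write attempted through this connection will now raise an
    # error instead of modifying data
    db.set_session(readonly=True)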
c554404d7f CHANGELOG.md: Add systemd units for indexer 2018-09-23 23:15:27 +03:00
90d7a452bd contrib: Add systemd units for indexer
An example systemd service unit for the indexer and an accompanying
timer unit.
2018-09-23 23:13:43 +03:00
431a1c9d64 CHANGELOG.md: Add unreleased changes 2018-09-23 23:04:01 +03:00
e1b9d1284f Rename project to DSpace Statistics API
At first I called it "CGSpace" because I was making it specifically
for our CGSpace DSpace repository, but the potential here is bigger
than that!
2018-09-23 23:02:21 +03:00
13 changed files with 244 additions and 75 deletions

.gitignore

@@ -1,3 +1,2 @@
 __pycache__
 venv
-*.db

.travis.yml (new file)

@@ -0,0 +1,9 @@
+language: python
+python:
+  - "3.5"
+  - "3.6"
+  - "3.7"
+install:
+  - pip install -r requirements.txt
+
+# vim: ts=2 sw=2 et

CHANGELOG.md

@@ -4,6 +4,47 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.3.2] - 2018-09-25
+### Changed
+- /item/id route now returns HTTP 404 if an item is not found
+
+## [0.3.1] - 2018-09-25
+### Changed
+- Force SolrClient's kazoo dependency to version 2.5.0 to work with Python 3.7
+- Add Python 3.7 to Travis CI configuration
+
+## [0.3.0] - 2018-09-25
+### Added
+- requirements.txt for pip
+- Travis CI build configuration for Python 3.5 and 3.6
+- Documentation on using the API
+### Changed
+- The "all items" route from / to /items
+
+## [0.2.1] - 2018-09-24
+### Changed
+- Environment settings in example systemd unit files
+- Use psycopg2.extras.DictCursor for PostgreSQL connection
+
+## [0.2.0] - 2018-09-24
+### Changed
+- Use PostgreSQL instead of SQLite because UPSERT support needs a very new libsqlite3 whereas it's already in PostgreSQL 9.5+
+
+## [0.1.0] - 2018-09-24
+### Changed
+- Rename project to "DSpace Statistics API"
+- Use read-only database connection in API
+- Update systemd units for CGSpace→DSpace rename
+- Use UPSERT to simplify database schema and Python logic
+### Added
+- Example systemd service and timer unit for indexer service
+- Add top-level route to expose all item statistics
+### Removed
+- Ability to customize SOLR_CORE variable
+
 ## [0.0.4] - 2018-09-23
 ### Added
 - Added example systemd unit file for API

README.md

@@ -1,20 +1,30 @@
-# CGSpace Statistics API
+# DSpace Statistics API
 A quick and dirty REST API to expose Solr view and download statistics for items in a DSpace repository.
-Written and tested in Python 3.6. SolrClient (0.2.1) does not currently run in Python 3.7.0.
+Written and tested in Python 3.5, 3.6, and 3.7. Requires PostgreSQL version 9.5 or greater for [`UPSERT` support](https://wiki.postgresql.org/wiki/UPSERT).
 ## Installation
 Create a virtual environment and run it:
-    $ virtualenv -p /usr/bin/python3.6 venv
+    $ python -m venv venv
     $ . venv/bin/activate
-    $ pip install falcon gunicorn SolrClient
+    $ pip install -r requirements.txt
     $ gunicorn app:api
+## Using the API
+The API exposes the following endpoints:
+- GET `/items`: return views and downloads for all items that Solr knows about¹. Accepts `limit` and `page` query parameters for pagination of results.
+- GET `/item/id`: return views and downloads for a single item (*id* must be a positive integer). Returns HTTP 404 if an item id is not found.
+¹ We are querying the Solr statistics core, which technically only knows about items that have either views or downloads.
 ## Todo
-- Ability to return a paginated list of items (on a different route?)
 - Add API documentation
 - Close up DB connection when gunicorn shuts down gracefully
 - Better logging
 - Tests
 ## License
 This work is licensed under the [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html).
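
For illustration, querying the endpoints with the requests module (a sketch; assumes gunicorn's default bind of 127.0.0.1:8000, and the item id is made up):

    import requests

    # all items, second page of up to 50 results
    r = requests.get('http://127.0.0.1:8000/items', params={'limit': 50, 'page': 1})
    print(r.json()['totalPages'])

    # a single item; an unknown id returns HTTP 404
    r = requests.get('http://127.0.0.1:8000/item/1234')
    print(r.status_code)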

app.py

@@ -1,45 +1,72 @@
-# Tested with Python 3.6
-# See DSpace Solr docs for tips about parameters
-# https://wiki.duraspace.org/display/DSPACE/Solr
-from config import SOLR_CORE
 from database import database_connection
 import falcon
-from solr import solr_connection
 db = database_connection()
-solr = solr_connection()
+db.set_session(readonly=True)
+class AllItemsResource:
+    def on_get(self, req, resp):
+        """Handles GET requests"""
+        # Return HTTPBadRequest if id parameter is not present and valid
+        limit = req.get_param_as_int("limit", min=0, max=100) or 100
+        page = req.get_param_as_int("page", min=0) or 0
+        offset = limit * page
+        cursor = db.cursor()
+        # get total number of items so we can estimate the pages
+        cursor.execute('SELECT COUNT(id) FROM items')
+        pages = round(cursor.fetchone()[0] / limit)
+        # get statistics, ordered by id, and use limit and offset to page through results
+        cursor.execute('SELECT id, views, downloads FROM items ORDER BY id ASC LIMIT {} OFFSET {}'.format(limit, offset))
+        results = cursor.fetchmany(limit)
+        cursor.close()
+        # create a list to hold dicts of item stats
+        statistics = list()
+        # iterate over results and build statistics object
+        for item in results:
+            statistics.append({ 'id': item['id'], 'views': item['views'], 'downloads': item['downloads'] })
+        message = {
+            'currentPage': page,
+            'totalPages': pages,
+            'limit': limit,
+            'statistics': statistics
+        }
+        resp.media = message
 class ItemResource:
     def on_get(self, req, resp, item_id):
         """Handles GET requests"""
         cursor = db.cursor()
-        # get item views (and catch the TypeError if item doesn't have any views)
-        cursor.execute('SELECT views FROM itemviews WHERE id={0}'.format(item_id))
-        try:
-            views = cursor.fetchone()['views']
-        except:
-            views = 0
-        # get item downloads (and catch the TypeError if item doesn't have any downloads)
-        cursor.execute('SELECT downloads FROM itemdownloads WHERE id={0}'.format(item_id))
-        try:
-            downloads = cursor.fetchone()['downloads']
-        except:
-            downloads = 0
-        statistics = {
-            'id': item_id,
-            'views': views,
-            'downloads': downloads
-        }
-        resp.media = statistics
+        cursor.execute('SELECT views, downloads FROM items WHERE id={}'.format(item_id))
+        if cursor.rowcount == 0:
+            raise falcon.HTTPNotFound(
+                title='Item not found',
+                description='The item with id "{}" was not found.'.format(item_id)
+            )
+        else:
+            results = cursor.fetchone()
+            statistics = {
+                'id': item_id,
+                'views': results['views'],
+                'downloads': results['downloads']
+            }
+            resp.media = statistics
+        cursor.close()
 api = falcon.API()
+api.add_route('/items', AllItemsResource())
 api.add_route('/item/{item_id:int}', ItemResource())
 # vim: set sw=4 ts=4 expandtab:

config.py

@@ -2,8 +2,10 @@ import os
 # Check if Solr connection information was provided in the environment
 SOLR_SERVER = os.environ.get('SOLR_SERVER', 'http://localhost:8080/solr')
-SOLR_CORE = os.environ.get('SOLR_CORE', 'statistics')
-SQLITE_DB = os.environ.get('SQLITE_DB', 'statistics.db')
+DATABASE_NAME = os.environ.get('DATABASE_NAME', 'dspacestatistics')
+DATABASE_USER = os.environ.get('DATABASE_USER', 'dspacestatistics')
+DATABASE_PASS = os.environ.get('DATABASE_PASS', 'dspacestatistics')
+DATABASE_HOST = os.environ.get('DATABASE_HOST', 'localhost')
 # vim: set sw=4 ts=4 expandtab:
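
Since every setting falls back to a sane default, configuration happens entirely through the environment; for example (a sketch, with a made-up hostname):

    $ export DATABASE_HOST=postgres.example.com
    $ gunicorn app:api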

contrib/cgspace-statistics-api.service (deleted)

@@ -1,18 +0,0 @@
-[Unit]
-Description=CGSpace Statistics API
-After=network.target
-
-[Service]
-Environment=SOLR_SERVER=http://localhost:8081/solr
-Environment=SOLR_CORE=statistics
-User=nobody
-Group=nogroup
-WorkingDirectory=/opt/ilri/cgspace-statistics-api
-ExecStart=/opt/ilri/cgspace-statistics-api/venv/bin/gunicorn \
-    --bind 127.0.0.1:5000 \
-    app:api
-ExecReload=/bin/kill -s HUP $MAINPID
-ExecStop=/bin/kill -s TERM $MAINPID
-
-[Install]
-WantedBy=multi-user.target

contrib/dspace-statistics-api.service (new file)

@@ -0,0 +1,20 @@
+[Unit]
+Description=DSpace Statistics API
+After=network.target
+
+[Service]
+Environment=DATABASE_NAME=dspacestatistics
+Environment=DATABASE_USER=dspacestatistics
+Environment=DATABASE_PASS=dspacestatistics
+Environment=DATABASE_HOST=localhost
+User=nobody
+Group=nogroup
+WorkingDirectory=/opt/ilri/dspace-statistics-api
+ExecStart=/opt/ilri/dspace-statistics-api/venv/bin/gunicorn \
+    --bind 127.0.0.1:5000 \
+    app:api
+ExecReload=/bin/kill -s HUP $MAINPID
+ExecStop=/bin/kill -s TERM $MAINPID
+
+[Install]
+WantedBy=multi-user.target

contrib/dspace-statistics-indexer.service (new file)

@@ -0,0 +1,17 @@
+[Unit]
+Description=DSpace Statistics Indexer
+After=tomcat7.target
+
+[Service]
+Environment=SOLR_SERVER=http://localhost:8081/solr
+Environment=DATABASE_NAME=dspacestatistics
+Environment=DATABASE_USER=dspacestatistics
+Environment=DATABASE_PASS=dspacestatistics
+Environment=DATABASE_HOST=localhost
+User=nobody
+Group=nogroup
+WorkingDirectory=/opt/ilri/dspace-statistics-api
+ExecStart=/opt/ilri/dspace-statistics-api/venv/bin/python indexer.py
+
+[Install]
+WantedBy=multi-user.target

contrib/dspace-statistics-indexer.timer (new file)

@@ -0,0 +1,12 @@
+[Unit]
+Description=DSpace Statistics Indexer
+
+[Timer]
+# twice a day, at 6AM and 6PM
+OnCalendar=*-*-* 06:00:00,18:00:00
+# Add a random delay of 0–3600 seconds
+RandomizedDelaySec=3600
+Persistent=true
+
+[Install]
+WantedBy=timers.target
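
For completeness, a sketch of enabling these units (assuming they have been copied to /etc/systemd/system):

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable dspace-statistics-indexer.timer
    $ sudo systemctl start dspace-statistics-indexer.timer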

database.py

@@ -1,10 +1,12 @@
-from config import SQLITE_DB
-import sqlite3
+from config import DATABASE_NAME
+from config import DATABASE_USER
+from config import DATABASE_PASS
+from config import DATABASE_HOST
+import psycopg2
+import psycopg2.extras
 def database_connection():
-    connection = sqlite3.connect(SQLITE_DB)
-    # allow iterating over row results by column key
-    connection.row_factory = sqlite3.Row
+    connection = psycopg2.connect("dbname={} user={} password={} host='{}'".format(DATABASE_NAME, DATABASE_USER, DATABASE_PASS, DATABASE_HOST), cursor_factory=psycopg2.extras.DictCursor)
     return connection

indexer.py

@@ -1,10 +1,35 @@
 #!/usr/bin/env python
 #
-# Tested with Python 3.6
-# See DSpace Solr docs for tips about parameters
-# https://wiki.duraspace.org/display/DSPACE/Solr
-from config import SOLR_CORE
+# indexer.py
+#
+# Copyright 2018 Alan Orth.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+# ---
+#
+# Connects to a DSpace Solr statistics core and ingests item views and downloads
+# into a PostgreSQL database for use by other applications (like an API).
+#
+# This script is written for Python 3.5+ and requires several modules that you
+# can install with pip (I recommend using a Python virtual environment):
+#
+#   $ pip install SolrClient psycopg2-binary
+#
+# See: https://solrclient.readthedocs.io/en/latest/SolrClient.html
+# See: https://wiki.duraspace.org/display/DSPACE/Solr
 from database import database_connection
 from solr import solr_connection
@@ -12,7 +37,7 @@ def index_views():
     print("Populating database with item views.")
     # determine the total number of items with views (aka Solr's numFound)
-    res = solr.query(SOLR_CORE, {
+    res = solr.query('statistics', {
         'q':'type:2',
         'fq':'isBot:false AND statistics_type:view',
         'facet':True,
@@ -25,10 +50,12 @@ def index_views():
     results_num_pages = round(results_numFound / results_per_page)
     results_current_page = 0
-    while results_current_page <= results_num_pages:
-        print('Page {0} of {1}.'.format(results_current_page, results_num_pages))
-        res = solr.query(SOLR_CORE, {
+    cursor = db.cursor()
+    while results_current_page <= results_num_pages:
+        print('Page {} of {}.'.format(results_current_page, results_num_pages))
+        res = solr.query('statistics', {
             'q':'type:2',
             'fq':'isBot:false AND statistics_type:view',
             'facet':True,
@@ -43,17 +70,21 @@ def index_views():
         views = res.get_facets()
         # in this case iterate over the 'id' dict and get the item ids and views
         for item_id, item_views in views['id'].items():
-            db.execute('''REPLACE INTO itemviews VALUES (?, ?)''', (item_id, item_views))
+            cursor.execute('''INSERT INTO items(id, views) VALUES(%s, %s)
+                           ON CONFLICT(id) DO UPDATE SET views=excluded.views''',
+                           (item_id, item_views))
         db.commit()
         results_current_page += 1
+    cursor.close()
 def index_downloads():
     print("Populating database with item downloads.")
     # determine the total number of items with downloads (aka Solr's numFound)
-    res = solr.query(SOLR_CORE, {
+    res = solr.query('statistics', {
         'q':'type:0',
         'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
         'facet':True,
@@ -66,10 +97,12 @@ def index_downloads():
     results_num_pages = round(results_numFound / results_per_page)
     results_current_page = 0
-    while results_current_page <= results_num_pages:
-        print('Page {0} of {1}.'.format(results_current_page, results_num_pages))
-        res = solr.query(SOLR_CORE, {
+    cursor = db.cursor()
+    while results_current_page <= results_num_pages:
+        print('Page {} of {}.'.format(results_current_page, results_num_pages))
+        res = solr.query('statistics', {
             'q':'type:0',
             'fq':'isBot:false AND statistics_type:view AND bundleName:ORIGINAL',
             'facet':True,
@@ -84,20 +117,23 @@ def index_downloads():
         downloads = res.get_facets()
         # in this case iterate over the 'owningItem' dict and get the item ids and downloads
         for item_id, item_downloads in downloads['owningItem'].items():
-            db.execute('''REPLACE INTO itemdownloads VALUES (?, ?)''', (item_id, item_downloads))
+            cursor.execute('''INSERT INTO items(id, downloads) VALUES(%s, %s)
+                           ON CONFLICT(id) DO UPDATE SET downloads=excluded.downloads''',
+                           (item_id, item_downloads))
         db.commit()
         results_current_page += 1
+    cursor.close()
 db = database_connection()
 solr = solr_connection()
-# use separate views and downloads tables so we can REPLACE INTO carelessly (ie, item may have views but no downloads)
-db.execute('''CREATE TABLE IF NOT EXISTS itemviews
-          (id integer primary key, views integer)''')
-db.execute('''CREATE TABLE IF NOT EXISTS itemdownloads
-          (id integer primary key, downloads integer)''')
+# create table to store item views and downloads
+cursor = db.cursor()
+cursor.execute('''CREATE TABLE IF NOT EXISTS items
+              (id INT PRIMARY KEY, views INT DEFAULT 0, downloads INT DEFAULT 0)''')
 index_views()
 index_downloads()

requirements.txt (new file)

@@ -0,0 +1,12 @@
+certifi==2018.8.24
+chardet==3.0.4
+falcon==1.4.1
+gunicorn==19.9.0
+idna==2.7
+kazoo==2.5.0
+psycopg2-binary==2.7.5
+python-mimeparse==1.6.0
+requests==2.19.1
+six==1.11.0
+SolrClient==0.2.1
+urllib3==1.23