mirror of https://github.com/ilri/csv-metadata-quality.git synced 2024-12-23 12:34:36 +01:00

CSV Metadata Quality

A simple, but opinionated metadata quality checker and fixer designed to work with CSVs in the DSpace ecosystem (though it could theoretically work on any CSV that uses Dublin Core fields as columns). The implementation is essentially a pipeline of checks and fixes that begins with splitting multi-value fields on the standard DSpace "||" separator, trimming leading/trailing whitespace, and then proceeding to more specialized cases like ISSNs, ISBNs, languages, etc.
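The pipeline described above can be sketched in a few lines. This is a minimal illustration of the idea, not the tool's actual code; the function name is hypothetical:

```python
# Minimal sketch of the pipeline idea: split each multi-value field on the
# standard DSpace "||" separator, trim whitespace on each value, then run
# more specialized checks on the individual values.
def clean_field(field: str) -> str:
    values = [v.strip() for v in field.split("||")]
    # ... specialized checks (ISSN, ISBN, language, etc.) would run on each value here
    return "||".join(values)


print(clean_field(" Kenya || Tanzania "))  # Kenya||Tanzania
```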

Requires Python 3.8 or greater. CSV and Excel support comes from the Pandas library, though your mileage may vary with Excel because that path is much less tested.

Functionality

  • Validate dates, ISSNs, ISBNs, and multi-value separators ("||")
  • Validate languages against ISO 639-1 (alpha2) and ISO 639-3 (alpha3)
  • Experimental validation of titles and abstracts against the item's Dublin Core language field
  • Validate subjects against the AGROVOC REST API (see the --agrovoc-fields option)
  • Fix leading, trailing, and excessive (ie, more than one) whitespace
  • Fix invalid multi-value separators (|) using --unsafe-fixes
  • Fix problematic newlines (line feeds) using --unsafe-fixes
  • Remove unnecessary Unicode like non-breaking spaces, replacement characters, etc
  • Check for "suspicious" characters that indicate encoding or copy/paste issues, for example "foreˆt" should be "forêt"
  • Remove duplicate metadata values
  • Perform Unicode normalization on strings using --unsafe-fixes
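Two of the fixes listed above, whitespace cleanup and duplicate removal, can be sketched as follows. This is an illustration of the behavior, not the tool's actual implementation, and the function names are hypothetical:

```python
import re


def fix_whitespace(value: str) -> str:
    # Collapse runs of whitespace to a single space and trim the ends.
    return re.sub(r"\s+", " ", value).strip()


def remove_duplicates(field: str) -> str:
    # Drop repeated values while preserving the original order
    # (dict.fromkeys keeps insertion order in Python 3.7+).
    return "||".join(dict.fromkeys(field.split("||")))


print(fix_whitespace("  too   much  space "))        # too much space
print(remove_duplicates("Kenya||Kenya||Tanzania"))   # Kenya||Tanzania
```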

Installation

The easiest way to install CSV Metadata Quality is with pipenv:

$ git clone https://github.com/ilri/csv-metadata-quality.git
$ cd csv-metadata-quality
$ pipenv install
$ pipenv shell

Otherwise, if you don't have pipenv, you can use a vanilla Python virtual environment:

$ git clone https://github.com/ilri/csv-metadata-quality.git
$ cd csv-metadata-quality
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Usage

Run CSV Metadata Quality with the --help flag to see available options:

$ csv-metadata-quality --help

To validate and clean a CSV file you must specify input and output files using the -i and -o options. For example, using the included test file:

$ csv-metadata-quality -i data/test.csv -o /tmp/test.csv

Unsafe Fixes

You can enable several "unsafe" fixes with the --unsafe-fixes option. Currently this will attempt to fix invalid multi-value separators and remove newlines.

Invalid Multi-Value Separators

This is considered "unsafe" because it is theoretically possible for a single | character to be used legitimately in a metadata value, though in my experience it is always a typo. For example, if a user mistakenly writes Kenya|Tanzania when attempting to indicate two countries, the result will be one metadata value with the literal text Kenya|Tanzania. The --unsafe-fixes option will correct the invalid multi-value separator so that there are two metadata values, ie Kenya||Tanzania.
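The fix amounts to collapsing any run of pipes to the standard "||" separator. A minimal sketch of that idea (not the tool's actual code):

```python
import re


def fix_separators(value: str) -> str:
    # Collapse any run of pipes (|, |||, ...) to the standard DSpace "||",
    # turning "Kenya|Tanzania" into two proper values: "Kenya||Tanzania".
    return re.sub(r"\|+", "||", value)


print(fix_separators("Kenya|Tanzania"))   # Kenya||Tanzania
print(fix_separators("Kenya||Tanzania"))  # unchanged: Kenya||Tanzania
```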

Newlines

This is considered "unsafe" because some systems give special importance to vertical space and render it properly. DSpace does not support rendering newlines in its XMLUI and has, at times, suffered from parsing errors that cause the import process to fail if an input file had newlines. The --unsafe-fixes option strips Unix line feeds (U+000A).
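Stripping line feeds is simple string replacement. A sketch of the behavior, assuming newlines are removed outright rather than replaced with a space (the function name is hypothetical):

```python
def fix_newlines(value: str) -> str:
    # Strip Unix line feeds (U+000A) that can break DSpace imports.
    return value.replace("\n", "")


print(fix_newlines("South America,\nAfrica"))  # South America,Africa
```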

Unicode Normalization

Unicode is a standard for encoding text. As the standard aims to support most of the world's languages, characters can often be represented in different ways and still be valid Unicode. This leads to interesting problems that can be confusing unless you know what's going on behind the scenes. For example, the characters é and é look the same, but are not: technically they refer to different code points in the Unicode standard:

  • é is the single Unicode code point U+00E9
  • é is the sequence of Unicode code points U+0065 + U+0301

Read more about Unicode normalization.
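You can see the difference, and the effect of normalization, with Python's standard unicodedata module (this is a general illustration, not the tool's own code):

```python
import unicodedata

decomposed = "e\u0301"  # "é" as e + combining acute accent (U+0065 U+0301)
composed = "\u00e9"     # "é" as a single precomposed code point (U+00E9)

# They render identically but are different code point sequences...
print(decomposed == composed)  # False

# ...until normalized to the same form (NFC composes, NFD decomposes).
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```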

AGROVOC Validation

You can enable validation of metadata values in certain fields against the AGROVOC REST API with the --agrovoc-fields option. For example, in addition to agricultural subjects, many countries and regions are also present in AGROVOC. Enable this validation by specifying a comma-separated list of fields:

$ csv-metadata-quality -i data/test.csv -o /tmp/test.csv -u --agrovoc-fields dc.subject,cg.coverage.country
...
Invalid AGROVOC (dc.subject): FOREST
Invalid AGROVOC (cg.coverage.country): KENYAA

Note: Requests to the AGROVOC REST API are cached using requests_cache to speed up subsequent runs with the same data and to be kind to the system's administrators.

Experimental Checks

You can enable experimental support for validating whether the value of an item's dc.language.iso or dcterms.language field matches the actual language used in its title, abstract, and citation.

$ csv-metadata-quality -i data/test.csv -o /tmp/test.csv -e
...
Possibly incorrect language es (detected en): Incorrect ISO 639-1 language
Possibly incorrect language spa (detected eng): Incorrect ISO 639-3 language

This currently uses the Python langid library. In the future I would like to move to the fastText library, but there is currently an issue with their Python bindings that makes this unfeasible.
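The comparison between a declared language and a detected one has to account for both two-letter (ISO 639-1) and three-letter (ISO 639-3) declarations. A sketch of that logic, where the function name and the tiny alpha-2 to alpha-3 table are hypothetical:

```python
# Illustrative subset only; a real implementation would use a complete
# ISO 639 mapping (e.g. from a library such as pycountry).
ALPHA2_TO_ALPHA3 = {"en": "eng", "es": "spa", "fr": "fra"}


def language_matches(declared: str, detected_alpha2: str) -> bool:
    # Two-letter declarations compare directly against the detected code;
    # three-letter ones compare against the detected code's alpha-3 form.
    if len(declared) == 2:
        return declared == detected_alpha2
    return declared == ALPHA2_TO_ALPHA3.get(detected_alpha2)


print(language_matches("es", "en"))   # False -> flagged as incorrect ISO 639-1
print(language_matches("spa", "en"))  # False -> flagged as incorrect ISO 639-3
```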

Todo

  • Reporting / summary
  • Better logging, for example with INFO, WARN, and ERR levels
  • Verbose, debug, or quiet options
  • Warn if an author is shorter than 3 characters?
  • Validate dc.rights field against SPDX? Perhaps with an option like -m spdx to enable the spdx module?
  • Validate DOIs? Normalize to https://doi.org format? Or use just the DOI part: 10.1016/j.worlddev.2010.06.006
  • Warn if two items use the same file in filename column
  • Add an option to drop invalid AGROVOC subjects?
  • Add tests for application invocation, ie tests/test_app.py?

License

This work is licensed under the GPLv3.

The license allows you to use and modify the work for personal and commercial purposes, but if you distribute the work you must provide users with a means to access the source code for the version you are distributing. Read more about the GPLv3 at TL;DR Legal.