Compare commits

...

383 Commits

Author SHA1 Message Date
Alan Orth 2341c56c40
poetry.lock: run poetry update 2024-04-25 12:50:30 +03:00
Alan Orth 5be2195325
Add fix for normalizing DOIs 2024-04-25 12:49:19 +03:00
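
A minimal sketch of what a DOI normalization like the one above could look like; the pattern and output format are assumptions, not the project's actual code:

    import re

    # Hypothetical: normalize a DOI variant to the https://doi.org/ form
    value = "doi: 10.1186/1743-422X-8-11"
    match = re.search(r"10\.\d{4,9}/\S+", value)
    if match:
        print(f"https://doi.org/{match.group(0)}")
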
Alan Orth 736948ed2c
csv_metadata_quality/check.py: run rye fmt 2024-04-12 13:40:55 +03:00
Alan Orth ee0b448355
csv_metadata_quality/check.py: remove unused import 2024-04-12 11:07:36 +03:00
Alan Orth 4f3174a543
CHANGELOG.md: add note about SPDX license list
2024-03-02 10:39:00 +03:00
Alan Orth d5c25f82fa
Update SPDX license list
From: https://github.com/spdx/license-list-data/blob/main/json/licenses.json
2024-03-02 10:38:27 +03:00
Alan Orth 7b3e2b4e68
Merge pull request #43 from ilri/renovate/pytest-7.x-lockfile
chore(deps): update dependency pytest to v7.4.4
2024-01-05 16:40:13 +03:00
Alan Orth f92b2fe206
Merge pull request #44 from ilri/renovate/flake8-7.x
chore(deps): update dependency flake8 to v7
2024-01-05 16:25:22 +03:00
renovate[bot] df040b70c7
chore(deps): update dependency flake8 to v7
2024-01-05 00:58:28 +00:00
renovate[bot] 10bc8f3e14
chore(deps): update dependency pytest to v7.4.4
2023-12-31 13:47:46 +00:00
Alan Orth 7e6e92ecaa
poetry.lock: run poetry lock
2023-12-28 14:12:03 +03:00
Alan Orth a21ffb0fa8
Use py3langid instead of langid
Faster and more modern code for Python 3 as a drop-in replacement.

See: https://adrien.barbaresi.eu/blog/language-detection-langid-py-faster.html
2023-12-28 14:11:21 +03:00
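
A minimal sketch of the drop-in swap, assuming py3langid keeps langid's classify() interface (a (language, score) tuple):

    # Before: import langid
    import py3langid as langid

    # classify() is unchanged, so calling code needs no edits
    lang, confidence = langid.classify("This text is in English")
    print(lang)  # en
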
Alan Orth fb341dd9fa
Merge pull request #37 from ilri/renovate/actions-setup-python-5.x
chore(deps): update actions/setup-python action to v5
2023-12-28 09:02:41 +03:00
Alan Orth 2e943ee4db
Merge pull request #39 from ilri/renovate/isort-5.x-lockfile
chore(deps): update dependency isort to v5.13.2
2023-12-28 09:01:48 +03:00
Alan Orth 6d3a9870d6
Merge pull request #41 from ilri/renovate/pycountry-23.x-lockfile
fix(deps): update dependency pycountry to v23.12.11
2023-12-28 09:01:21 +03:00
Alan Orth 82ecf7119a
Merge pull request #42 from ilri/renovate/black-23.x-lockfile
chore(deps): update dependency black to v23.12.1
2023-12-28 09:00:39 +03:00
renovate[bot] 1db21cf275
chore(deps): update dependency black to v23.12.1
2023-12-23 00:35:13 +00:00
renovate[bot] bcd1408798
chore(deps): update dependency isort to v5.13.2
2023-12-13 22:21:38 +00:00
renovate[bot] ee8d255811
fix(deps): update dependency pycountry to v23.12.11
2023-12-11 21:50:09 +00:00
Alan Orth 2cc2dbe952
tests: apply fixes from fixit
RewriteToLiteral: It's slower to call list() than using the empty literal
2023-12-09 12:20:35 +03:00
Alan Orth 940a325d61
poetry.lock: run poetry lock 2023-12-09 12:05:26 +03:00
Alan Orth 59b3b307c9
pyproject.toml: use official pycountry
The project is moving again and has all the latest data from the
iso-codes project.
2023-12-09 12:04:14 +03:00
Alan Orth b305da3f0b
poetry.lock: run poetry update
2023-12-07 17:10:01 +03:00
renovate[bot] 96a486471c
Update actions/setup-python action to v5
2023-12-06 13:13:11 +00:00
Alan Orth 530cd5863b
poetry.lock: run poetry update
2023-11-22 22:07:30 +03:00
Alan Orth f6018c51b6
Apply fixes from fixit
Apply recommended fix from fixit:

    RewriteToLiteral: It's slower to call list() than using the empty literal, because the name list must
    be looked up in the global scope in case it has been rebound.
2023-11-22 21:54:50 +03:00
Alan Orth 80c3f5b45a
Add fixit to dev dependencies 2023-11-22 21:54:09 +03:00
Alan Orth ba4637ea34
Merge pull request #31 from ilri/renovate/black-23.x-lockfile
Update dependency black to v23.11.0
2023-11-20 21:41:43 +03:00
Alan Orth 355428a691
Merge pull request #32 from ilri/renovate/country-converter-1.x
Update dependency country-converter to ~1.1.0
2023-11-20 21:39:36 +03:00
renovate[bot] 58d4de973e
Update dependency country-converter to ~1.1.0
2023-11-20 18:37:44 +00:00
Alan Orth e1216dae3c
Merge pull request #33 from ilri/renovate/pandas-2.x-lockfile
Update dependency pandas to v2.1.3
2023-11-20 21:36:20 +03:00
renovate[bot] 6b650ff1b3
Update dependency pandas to v2.1.3
2023-11-20 18:33:42 +00:00
Alan Orth fa7bde6fc0
Merge pull request #34 from ilri/renovate/requests-cache-1.x-lockfile
Update dependency requests-cache to v1.1.1
2023-11-20 21:32:50 +03:00
renovate[bot] f89159fe32
Update dependency requests-cache to v1.1.1
2023-11-19 09:26:49 +00:00
renovate[bot] 02058c5a65
Update dependency black to v23.11.0
2023-11-08 07:49:15 +00:00
Alan Orth 8fed6b71ff
Merge pull request #30 from ilri/renovate/ipython-8.x-lockfile
Update dependency ipython to v8.17.2
2023-10-31 22:15:50 +03:00
Alan Orth b005b28cbe
Merge pull request #29 from ilri/renovate/pandas-2.x-lockfile
Update dependency pandas to v2.1.2
2023-10-31 22:15:27 +03:00
renovate[bot] c626290599
Update dependency ipython to v8.17.2
2023-10-31 13:47:08 +00:00
renovate[bot] 1a06470b64
Update dependency pandas to v2.1.2
2023-10-26 23:01:25 +00:00
Alan Orth d46a81672e
Merge pull request #28 from ilri/renovate/pytest-7.x-lockfile
Update dependency pytest to v7.4.3
2023-10-25 12:08:23 +03:00
Alan Orth 2a50e75082
Merge pull request #27 from ilri/renovate/csvkit-1.x-lockfile
Update dependency csvkit to v1.3.0
2023-10-25 12:08:05 +03:00
Alan Orth 0d45e73983
Merge pull request #25 from ilri/renovate/black-23.x-lockfile
Update dependency black to v23.10.1
2023-10-25 12:07:15 +03:00
renovate[bot] 3611aab425
Update dependency pytest to v7.4.3
2023-10-24 22:36:05 +00:00
renovate[bot] 5c4ad0eb41
Update dependency black to v23.10.1
2023-10-23 20:03:53 +00:00
renovate[bot] f1f39722f6
Update dependency csvkit to v1.3.0
2023-10-18 07:56:03 +00:00
Alan Orth 1c03999582
Merge pull request #24 from ilri/renovate/actions-checkout-4.x
Update actions/checkout action to v4
2023-10-15 23:39:45 +03:00
Alan Orth 1f637f32cd
Rework requests-cache
We should only be running this once per invocation, not for every
row we check. This should be more efficient, but it means that we
don't cache responses when running via pytest, which is actually
probably a good thing.
2023-10-15 23:37:38 +03:00
Alan Orth b8241e919d
poetry.lock: run poetry update 2023-10-15 23:22:48 +03:00
Alan Orth b8dc19cc3f
csv_metadata_quality/check.py: enable requests-cache
This was disabled at some point. We also need to use the new delete
method instead.
2023-10-15 23:21:58 +03:00
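
A sketch of what the two requests-cache commits above describe: install the cache once per invocation and prune stale entries with the 1.x delete() method. The cache name and expiry here are assumptions:

    import requests_cache

    # Install once at startup, not once per row checked
    requests_cache.install_cache("agrovoc-response-cache", expire_after=86400)

    # requests-cache 1.x replaced remove_expired_responses() with delete()
    requests_cache.get_cache().delete(expired=True)
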
Alan Orth 93c9b739ac
csv_metadata_quality/check.py: use HTTPS
Use HTTPS for AGROVOC REST API.
2023-10-15 22:38:45 +03:00
Alan Orth 4ed2786703
pyproject.toml: update pycountry
Use the latest branch in my fork that has iso-codes 4.15.0.
2023-10-15 21:53:09 +03:00
renovate[bot] 8728789183
Update actions/checkout action to v4
2023-09-04 14:26:25 +00:00
Alan Orth bf90464809
poetry.lock: run poetry update
2023-08-08 09:55:41 +02:00
Alan Orth 1878002391 poetry.lock: run poetry update
2023-06-12 10:42:50 +03:00
Alan Orth d21d2621e3 csv_metadata_quality/app.py: read fields as strings
I suspect this undermines the PyArrow backend performance gains in
recent Pandas 2.0.0, but we are dealing with messy data sometimes
and we must rely on data being strings.
2023-06-12 10:42:50 +03:00
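
Reading everything as strings in pandas is a one-liner; the file name here is illustrative:

    import pandas as pd

    # Force every field to str so messy values are never coerced to
    # numbers or dates, even at the cost of some PyArrow-backend gains
    df = pd.read_csv("input.csv", dtype=str)
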
Alan Orth f3fb1ff7fb Don't crash when title is missing
We shouldn't crash the country/region checker/fixer when the title
field is missing, since we only use it to show status to the user.
2023-06-12 10:42:50 +03:00
Alan Orth 1fa81f7558
Merge pull request #13 from ilri/renovate/ipython-8.x-lockfile
Update dependency ipython to v8.14.0
2023-06-03 17:09:21 +03:00
renovate[bot] 7409193b6b
Update dependency ipython to v8.14.0
2023-06-02 15:58:34 +00:00
Alan Orth a84fcf0b7b
.drone.yml: try to use poetry instead of pip
2023-05-30 11:39:08 +03:00
Alan Orth 25ac290df4
.github: update Python actions
We don't need to use `python setup.py install` anymore. We can use
poetry directly in CI.

See: https://github.com/actions/setup-python/blob/main/docs/advanced-usage.md
2023-05-29 22:58:01 +03:00
Alan Orth 3f52bad1e3
Remove setup.py
As far as I understand this is deprecated.
2023-05-29 22:41:37 +03:00
Alan Orth 0208ad0ade
Merge pull request #12 from ilri/renovate/requests-cache-1.x
Update dependency requests-cache to v1
2023-05-29 22:37:23 +03:00
renovate[bot] 3632ae0fc9
Update dependency requests-cache to v1
2023-05-29 19:25:58 +00:00
Alan Orth 17d089cc6e
poetry.lock: run poetry update
2023-05-29 22:24:22 +03:00
Alan Orth bc470a4343
pyproject.toml: rework pandas and pyarrow
We don't explicitly depend on PyArrow. It should come as a pandas
extra. I installed it like this:

    $ poetry add pandas=="^2.0.2[feather,performance]"

See: https://pandas.pydata.org/docs/getting_started/install.html#other-data-sources
2023-05-29 22:24:04 +03:00
Alan Orth be609a809d
setup.py: add Python 3.11 classifier 2023-05-29 21:32:59 +03:00
Alan Orth de3387ded7
Use Python 3.11 in Drone CI and GitHub Actions 2023-05-29 21:31:03 +03:00
Alan Orth f343e87f0c
renovate.json: fix json 2023-05-29 21:26:03 +03:00
Alan Orth 7d3524fbd5
renovate.json: disable requirements.txt support
Poetry is used to manage dependencies. The requirements.txt files
are generated manually by exporting from Poetry.
2023-05-29 21:11:48 +03:00
Alan Orth c614b71a52
Merge pull request #5 from ilri/renovate/configure
Configure Renovate
2023-05-29 21:02:16 +03:00
renovate[bot] d159a839f3
Add renovate.json 2023-05-29 17:40:33 +00:00
Alan Orth 36e2ebe5f4
poetry.lock: run poetry update
2023-05-10 15:06:41 +03:00
Alan Orth 33f67b7a7c
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-05-03 14:29:12 +03:00
Alan Orth c0e1448439
poetry.lock: run poetry update 2023-05-03 14:28:47 +03:00
Alan Orth 5d0804a08f
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-04-22 12:44:54 -07:00
Alan Orth f01c9edf17
poetry.lock: run poetry update 2023-04-22 12:44:16 -07:00
Alan Orth 8d4295b2b3
CHANGELOG.md: add note about description field 2023-04-22 12:17:44 -07:00
Alan Orth e2d46e9495
csv_metadata_quality/app.py: skip newline fix on description
The description field often has free-form text like the abstract and
there are too many legitimate newlines here to be correcting them
automatically.
2023-04-22 12:16:13 -07:00
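
A sketch of the skip logic described above; the helper and field names are hypothetical, not the project's actual code:

    def should_fix_newlines(field_name: str) -> bool:
        # Free-form fields like the description legitimately contain newlines
        skip = {"dcterms.description"}
        return field_name.split("[")[0] not in skip

    print(should_fix_newlines("dcterms.description[en_US]"))  # False
    print(should_fix_newlines("dc.title"))  # True
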
Alan Orth 1491e1edb0
Fix path to data/licenses.json
When we install and run this from CI, this file needs to exist in
the package's folder inside site-packages. Then we can use __file__
to get the path relative to the package.

See: https://python-packaging.readthedocs.io/en/latest/non-code-files.html
2023-04-05 15:28:21 +03:00
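
A sketch of resolving the data file relative to the package with __file__, per the commit above (exact paths assumed):

    import json
    import os

    # __file__ points inside site-packages after installation, so the
    # data file is found wherever the package itself lives
    path = os.path.join(os.path.dirname(__file__), "data", "licenses.json")
    with open(path) as f:
        licenses = json.load(f)
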
Alan Orth 34142c3e6b
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-04-05 12:51:56 +03:00
Alan Orth 0c88b96e8d
poetry.lock: run poetry update 2023-04-05 12:51:19 +03:00
Alan Orth 2e55b4d6e3
pyproject.toml: add pyarrow explicitly
CI was failing because pyarrow is not an extra provided by pandas.
Indeed, according to the docs the named extras installing pyarrow
are actually feather and parquet, so we need to install pyarrow
explicitly.

See: https://pandas.pydata.org/pandas-docs/version/2.0/getting_started/install.html#install-dependencies
2023-04-05 12:49:40 +03:00
Alan Orth c90aad29f0
Use poetry dev group
This is the new syntax since Poetry 1.2.0.

See: https://python-poetry.org/docs/managing-dependencies/#installing-group-dependencies
2023-04-05 12:37:03 +03:00
Alan Orth 6fd1e1377f
Add pyarrow extra to Python Pandas deps 2023-04-05 11:40:22 +03:00
Alan Orth c64b7eb1f1
CHANGELOG.md: add note about Pandas 2.0.0 2023-04-05 11:17:48 +03:00
Alan Orth 29cbc4f3a3
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-04-05 11:17:06 +03:00
Alan Orth 307af1acfc
poetry.lock: run poetry update 2023-04-05 11:15:55 +03:00
Alan Orth b5106de9df
pyproject.toml: Pandas 2.0.0 2023-04-05 11:15:40 +03:00
Alan Orth 9eeadfc44e
poetry.lock: after adding pandas 2.0.0rc1
This is going to be an issue on the master branch if I update any
dependencies in the mean time...
2023-03-22 12:17:26 +03:00
Alan Orth d4aed378cf
Switch to pandas 2.0.0rc1
Seems to work fine with the new PyArrow datatypes.
2023-03-22 12:16:56 +03:00
Alan Orth 20a2cce34b
CHANGELOG.md: add fixes
2023-03-10 16:17:20 +03:00
Alan Orth d661ffe439
Check comma space on bibliographicCitation too
The regex was only matching `dc.identifier.citation`, but we need
to match `dcterms.bibliographicCitation` too.
2023-03-10 16:13:16 +03:00
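
A sketch of the widened field match; the project's actual pattern may differ:

    import re

    # Match both citation field names, including language variants
    citation_re = re.compile(r"^(dc\.identifier\.citation|dcterms\.bibliographicCitation)")
    print(bool(citation_re.match("dcterms.bibliographicCitation[en_US]")))  # True
    print(bool(citation_re.match("dc.identifier.citation")))  # True
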
Alan Orth 45a310387a
Don't fix multi-value separators on citations 2023-03-10 16:12:30 +03:00
Alan Orth 47b03c49ba
README.md: Update TODOs
2023-03-07 10:45:04 +03:00
Alan Orth 986b81cbf4
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-03-04 07:35:36 +03:00
Alan Orth d43a47ae32
poetry.lock: run poetry update 2023-03-04 07:34:50 +03:00
Alan Orth ede37569f1
pyproject.toml: use pycountry with iso-codes 4.13.0 2023-03-04 07:33:48 +03:00
Alan Orth 0c53efe60a
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-03-04 06:54:34 +03:00
Alan Orth 5f0e25b818
poetry.lock: run poetry update 2023-03-04 06:53:55 +03:00
Alan Orth 4776154d6c
pyproject.toml: switch back to upstream country_converter
Version 1.0.0 incorporates my change to Myanmar.

See: https://github.com/IndEcol/country_converter/releases/tag/v1.0.0
2023-03-04 06:52:56 +03:00
Alan Orth fdccdf7318
Version 0.6.1
2023-02-23 13:46:56 +03:00
Alan Orth ff2c986eec
setup.py: minimum python 3.9 2023-02-23 11:47:40 +03:00
Alan Orth 547574866e
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-02-23 11:46:24 +03:00
Alan Orth 8aa7b93d87
poetry.lock: run poetry update 2023-02-23 11:45:53 +03:00
Alan Orth 53fdb50906
csv_metadata_quality/check.py: run black
2023-02-18 22:10:04 +03:00
Alan Orth 3e0e9a7f8b
poetry.lock: run poetry update 2023-02-18 22:09:33 +03:00
Alan Orth 03d824b78e
pyproject.toml: update some dependencies 2023-02-18 22:09:05 +03:00
Alan Orth 8bc4cd419c
Strip filename descriptions before checking
When checking for uncommon file extensions in the filename field
we should strip descriptions that are meant for SAF Bundler, for
example: Annual_Report_2020.pdf__description:Report. This ends up
as a false positive that spams the output with warnings.
2023-02-13 11:00:57 +03:00
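
Stripping the SAF Bundler description suffix before the extension check, in miniature:

    # Drop the __description: suffix so only the real filename is checked
    filename = "Annual_Report_2020.pdf__description:Report"
    bare = filename.split("__description:")[0]
    print(bare)  # Annual_Report_2020.pdf
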
Alan Orth bde38e9ed4
CHANGELOG.md: add notes about abstracts 2023-02-13 10:39:03 +03:00
Alan Orth 8db1e36a6d
csv_metadata_quality/app.py: skip abstract in separator check
Also skip the abstract in the separator check, since it's rare to
have any "|" here; if one is present, it's more likely there for a
reason.
2023-02-13 10:37:33 +03:00
Alan Orth fbb625be5c
Ignore common non-SPDX licenses
This is meant to catch licenses that are supposed to be SPDX but
aren't, not licenses that *aren't* supposed to be SPDX. We have so
many free-text license descriptions like "Copyrighted" and "Other"
that I'm sick of seeing warnings for them!
2023-02-07 17:01:56 +03:00
Alan Orth 084b970798
CHANGELOG.md: add note about abstract field 2023-02-07 16:52:34 +03:00
Alan Orth 171b35b015
Add data/abstract-check.csv
A test file with several whitespace and newline scenarios in the
abstract. I am currently disabling whitespace/newline fixes in the
abstract because they are too aggressive.
2023-02-07 16:50:47 +03:00
Alan Orth 545bb8cd0c
csv_metadata_quality/app.py: disable whitespace on abstracts
It's too aggressive on abstracts. If people paste in text from a
PDF there are often newlines, and most of the time this is what
they want.
2023-02-07 16:48:40 +03:00
Alan Orth d5afbad788
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2023-01-24 14:18:19 +03:00
Alan Orth d40c9ed97a
poetry.lock: run poetry update 2023-01-24 14:17:44 +03:00
Alan Orth c4a2ee8563
CHANGELOG.md: add note about fix.separators() 2023-01-24 14:16:23 +03:00
Alan Orth 3596381d03
csv_metadata_quality/app.py: separators fix
Don't run the invalid separators fix on title fields because some
items use "|" in the title to indicate something like a subtitle.

For example:

    Progress Review and Work Planning Meeting | Day 1
2023-01-24 14:13:55 +03:00
Alan Orth 5abd32a41f
CHANGELOG.md: run poetry update 2022-12-20 15:09:58 +02:00
Alan Orth 0ed0fabe21
tests/test_check.py: remove local variables
This was raised by ruff.

> F841 Local variable `result` is assigned to but never used

We don't actually need the output of the function since these tests
capture the stdout.
2022-12-20 15:09:20 +02:00
Alan Orth d5cfec65bd
tests/test_check.py: fix logic in assert
This was raised by ruff.

> E711 Comparison to `None` should be `cond is None`
2022-12-20 15:07:41 +02:00
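
The fix ruff is asking for, in miniature:

    result = None
    # E711: compare to None with identity, not equality
    assert result is None  # instead of: assert result == None
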
Alan Orth 66893753ba
Move isort config to pyproject.toml
See: https://pycqa.github.io/isort/docs/configuration/black_compatibility.html
2022-12-20 15:03:10 +02:00
Alan Orth 57be05ebb6
poetry.lock: run poetry update 2022-12-20 14:59:35 +02:00
Alan Orth 8c23382b22
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-12-13 10:47:16 +03:00
Alan Orth f640161d87
CHANGELOG.md: add notes about SPDX and Python 2022-12-13 10:45:36 +03:00
Alan Orth e81ae93bf0
poetry.lock: run poetry update 2022-12-13 10:44:06 +03:00
Alan Orth 50ea5863dd
.drone.yml: only test on Python 3.9+ 2022-12-13 10:43:18 +03:00
Alan Orth 2dfb073b6b
Update minimum Python version to 3.9
Due to importlib.resources.files. It's a very minor thing and there
are ways to use back-ported third-party modules with this
functionality, but I'm the only one using this so...

See: https://docs.python.org/3/library/importlib.resources.html#importlib.resources.files
2022-12-13 10:41:32 +03:00
Alan Orth 7cc49b500d
Use licenses.json from SPDX instead of spdx-license-list
spdx-license-list has been deprecated[1] and already has outdated
information compared to recent SPDX data releases. Now I use the
JSON license data directly from SPDX[2] (currently version 3.19).

The JSON file is loaded from the package's data directory using
Python 3's stdlib functions from importlib[3], though we now need
Python 3.9 as a minimum for importlib.resources.files[4].

Also note that the data directory is not properly packaged via
setuptools, so this only works for local installs, and not via
versions published to pypi, for example (I'm currently not doing
this anyways). If I want to publish this in the future I will
need to modify setup.py/pyproject.toml to include the data files.

[1] https://gitlab.com/uniqx/spdx-license-list
[2] https://github.com/spdx/license-list-data/blob/main/json/licenses.json
[3] https://copdips.com/2022/09/adding-data-files-to-python-package-with-setup-py.html
[4] https://docs.python.org/3/library/importlib.resources.html#importlib.resources.files
2022-12-13 10:39:17 +03:00
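
A sketch of the importlib-based loading; the resource path follows the commit and the JSON structure is SPDX's published licenses.json format:

    import json
    from importlib.resources import files  # needs Python 3.9+

    # Load the SPDX data bundled in the package's data directory
    data = (files("csv_metadata_quality") / "data" / "licenses.json").read_text()
    spdx_ids = {lic["licenseId"] for lic in json.loads(data)["licenses"]}
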
Alan Orth 051777bcec
Ignore subregion field for missing region checks
Due to a sloppy regex I was sometimes matching the subregion field
when checking for missing UN M.49 regions in the region field.
2022-12-07 23:18:47 +01:00
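
A sketch of the tightened match: requiring the literal ".region" keeps "subregion" fields out (field names here are illustrative):

    import re

    region_re = re.compile(r"\.region(\[|$)")
    for field in ["cg.coverage.region", "cg.coverage.region[en_US]", "cg.coverage.subregion"]:
        print(field, bool(region_re.search(field)))  # True, True, False
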
Alan Orth 58e956360a
Add tests/test_check.py: fix test
2022-11-28 22:12:17 +03:00
Alan Orth 3532175748
.drone.yml: install git
Apparently the slim images don't come with git, which we need for
cloning some dependencies.
2022-11-28 22:05:34 +03:00
Alan Orth a7bc929af8
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-11-28 17:42:26 +03:00
Alan Orth 141b2e1da3
csv_metadata_quality/check.py: update region output
Add the country to the message about missing regions. This makes it
easier to see which country is triggering the missing region error,
and helps in case of debugging possible mistakes in the data coming
from the country_converter library.
2022-11-28 17:40:27 +03:00
Alan Orth 7097136b7e
Use my fork of country_converter again
There is an issue with the UN M.49 region for Myanmar.
2022-11-28 17:38:45 +03:00
Alan Orth d134c93663
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-11-28 17:16:09 +03:00
Alan Orth 9858406894
poetry.lock: run poetry update 2022-11-28 17:15:19 +03:00
Alan Orth b02f1f65ee
pyproject.toml: use upstream country_converter
Version 0.8.0 has the country and UN M.49 region fixes.

See: https://github.com/konstantinstadler/country_converter/releases/tag/v0.8.0
2022-11-28 17:14:16 +03:00
Alan Orth 4d5ef38dde
pyproject.toml: add ipython to dev dependencies 2022-11-28 17:11:18 +03:00
Alan Orth eaa8f31faf
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-11-08 10:22:39 +03:00
Alan Orth df57988e5a
Use my fork of pycountry
Until they update to iso-codes 4.12.0.

See: https://github.com/flyingcircusio/pycountry/pull/149
2022-11-08 10:21:28 +03:00
Alan Orth bddf4da559
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-11-08 10:06:26 +03:00
Alan Orth 15f52f8be8
Switch to my fork of country-converter
Until a few issues are resolved regarding new countries and regions.

See: https://github.com/konstantinstadler/country_converter/pull/122
See: https://github.com/konstantinstadler/country_converter/pull/123
2022-11-08 10:04:31 +03:00
Alan Orth bc909464c7
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-11-07 12:14:46 +03:00
Alan Orth 2d46259dfe
poetry.lock: run poetry update 2022-11-07 12:13:44 +03:00
Alan Orth ca82820a8e
pyproject.toml: update dependencies to latest 2022-11-07 12:13:28 +03:00
Alan Orth 86b4e5e182
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-11-01 12:21:41 +03:00
Alan Orth e5d5ae7e5d
poetry.lock: run poetry update 2022-11-01 12:20:43 +03:00
Alan Orth 8f3db86a36
CHANGELOG.md: fix header
2022-10-31 11:43:14 +03:00
Alan Orth b0721b0a7a
.github: use ubuntu-22.04 for actions
Apparently 'ubuntu-latest' is still 20.04 and today is 2022-10-03,
which seems a bit old!

See: https://github.com/actions/runner-images
2022-10-03 19:49:24 +03:00
Alan Orth 4e5faf51bd .github/workflows: use pip caching
See: https://github.com/actions/setup-python/blob/main/docs/advanced-usage.md#caching-packages
2022-10-03 19:39:52 +03:00
Alan Orth 5ea38d65bd .github/workflows: update actions
Update actions to latest versions:

- actions/checkout@v3
- actions/setup-python@v4
2022-10-03 19:39:52 +03:00
Alan Orth 58b7b6e9d8
Version 0.6.0
2022-09-02 16:35:58 +03:00
Alan Orth ffdf1eca7b
setup.py: remove Python 3.7 support
I had already set the minimum to Python 3.8 elsewhere, but forgot
to do it here. I am not sure if Python 3.7 will still work here or
not so let's just keep it in sync with the other docs.
2022-09-02 16:34:16 +03:00
Alan Orth 59742e47f1
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-09-02 16:32:04 +03:00
Alan Orth 9c741b1d49
poetry.lock: sync latest deps 2022-09-02 16:31:19 +03:00
Alan Orth 21e9948a75
pyproject.toml: manually updated all deps
Update all deps to their latest versions on pypi.org and remove the
explicit dependency on SQLAlchemy.
2022-09-02 16:30:40 +03:00
Alan Orth f64435fc9d
tests/test_check.py: add missing excludes 2022-09-02 16:24:33 +03:00
Alan Orth 566c2b45cf
Remove Excel support
I never used this and it seems xlrd doesn't even support .xlsx
anymore anyways. If this was needed I could theoretically use openpyxl
but I'd rather just stick to CSV.
2022-09-02 16:14:24 +03:00
Alan Orth 41b813be6e
CHANGELOG.md: add note about exclude logic 2022-09-02 16:03:51 +03:00
Alan Orth 040e56fc76
Improve exclude function
When a user explicitly requests that a field be excluded with -x we
skip that field in most checks. Up until now that did not include
the item-based checks using a transposed dataframe because we don't
know the metadata field names (labels) until we iterate over them.

Now the excludes are respected for item-based checks.
2022-09-02 15:59:22 +03:00
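
A sketch of the item-based iteration with excludes honored, per the commit above (field names illustrative):

    import pandas as pd

    df = pd.DataFrame({"dc.title": ["Item A"], "dcterms.subject": ["maize"]})
    exclude = ["dcterms.subject"]

    # Item-based checks run over the transposed frame, where the index
    # is the field label, so excludes can be respected here too
    for label, values in df.T.iterrows():
        if label in exclude:
            continue
        print(label, list(values))
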
Alan Orth 1f76247353
csv_metadata_quality/app.py: rework exclude/skip
Instead of processing the excludes inside the for column loop we do
it once before and then only need to check if the current column is
in the list.
2022-09-02 10:35:04 +03:00
Alan Orth 2e489fc921
Add new data/test-geography.csv test file
This file has metadata to test different scenarios related to
checking and fixing missing regions.
2022-09-01 16:57:29 +03:00
Alan Orth 117c6ca85d
csv_metadata_quality/check.py: missing region fixes
Port over the recent fixes and logic improvements to regions from
fix.py.
2022-09-01 16:38:35 +03:00
Alan Orth f49214fa2e
csv_metadata_quality/fix.py: fix bug in regions
We need to make sure we're only manipulating the regions if we have
any missing. The previous code was always manipulating the existing
row, even when there were no missing regions, which resulted in new
values like "Eastern Africa||".
2022-09-01 16:15:32 +03:00
Alan Orth 7ce20726d0
csv_metadata_quality/fix.py: minor change
Print missing regions when we know they are missing, instead of
doing another check later and looping over them again.
2022-09-01 16:03:49 +03:00
Alan Orth 473be5ac2f
csv_metadata_quality/fix.py: don't add "not found" region
country_converter returns the literal "not found" string if a
country cannot be found. In that case we do not want to consider that as
a region!
2022-09-01 15:46:21 +03:00
Alan Orth 7c61cae417 csv_metadata_quality/fix.py: silence warning
By default country_converter prints "not found in regex" if a
country is not found. We can silence this by switching the logging
level to something above WARNING.
2022-09-01 15:44:50 +03:00
Alan Orth ae16289637
csv_metadata_quality/fix.py: Minor change
The country_converter documentation says we should instantiate the
CountryConverter() class once instead of calling coco.convert() in
each iteration of the loop so we don't end up loading the data file
more than once.
2022-09-01 15:40:45 +03:00
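
The three fix.py commits above amount to something like this sketch; the classification name and logging approach are assumptions:

    import logging

    import country_converter as coco

    # Raise the log level above WARNING to silence "not found in regex"
    logging.basicConfig(level=logging.ERROR)

    # Instantiate once so the data file isn't re-loaded every iteration
    converter = coco.CountryConverter()

    region = converter.convert(names="Atlantis", to="UNregion")
    # Don't treat the literal "not found" sentinel as a region
    if region != "not found":
        print(f"Adding missing region: {region}")
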
Alan Orth fdb7900cd0
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --with dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==
2022-09-01 11:21:10 +03:00
Alan Orth 9c65569c43
poetry.lock: run poetry update 2022-09-01 08:44:12 +03:00
Alan Orth 0cf0bc97f0
csv_metadata_quality/fix.py: fix logic error again
It seems there was another logic error raised by the test in pytest.
With my real data, it was enough to check if the region column was
None, but with my test I was explicitly setting the region to "" (an
empty string). So to be really sure we should check if the string
is not None *and* if its length is greater than 0.
2022-08-03 20:51:14 +03:00
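
The check the commit arrives at, in miniature:

    region = ""
    # An empty string and None must both count as "no region present"
    if region is not None and len(region) > 0:
        print("Region present")
    else:
        print("Region missing")
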
Alan Orth 40c3585bab
csv_metadata_quality/fix.py: fix logic error
Fix string concatenation with existing regions.
2022-08-03 18:26:08 +03:00
Alan Orth b9c44aed7d
csv_metadata_quality/fix.py: fix logic issue
Forgot to return the row as-is if we don't find any countries.
2022-08-02 10:17:30 +03:00
Alan Orth 032a1db392
README.md: Add note about missing regions
2022-07-28 16:58:01 +03:00
Alan Orth da87531779
CHANGELOG.md: Add note about adding missing regions 2022-07-28 16:54:05 +03:00
Alan Orth 689ee184f7
Add unsafe check to add missing regions 2022-07-28 16:52:43 +03:00
Alan Orth 344993370c
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2022-07-08 15:50:42 +03:00
Alan Orth 00b4dca185
poetry.lock: run poetry update 2022-07-08 15:50:03 +03:00
Alan Orth 5a87bf4317
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2022-03-21 14:37:38 +03:00
Alan Orth c706719d8b
poetry.lock: run poetry update 2022-03-21 14:37:03 +03:00
Alan Orth e7ea8ef9f0
README.md: add note about spdx-license-list
This Python module was deprecated in favor of using the SPDX license
data directly.

See: https://github.com/spdx/license-list-data
2022-01-30 13:27:20 +03:00
Alan Orth ea050376fc
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2022-01-30 13:26:37 +03:00
Alan Orth 4ba615cd41
poetry.lock: run poetry update 2022-01-30 13:26:04 +03:00
Alan Orth b0d46cd864
pyproject.toml: update black
It's no longer in beta!
2022-01-30 13:22:47 +03:00
Alan Orth 3ee9319d84
pyproject.toml: bump flake8 2022-01-30 13:21:09 +03:00
Alan Orth 4d5f4b5abb
pyproject.toml: update pycountry
Seems to be a few major versions from 19.x.x to 21.x.x. All tests
passing in pytest so it's probably fine.
2022-01-30 13:15:38 +03:00
Alan Orth 98d38801fa
pyproject.toml: update requests and requests-cache 2022-01-30 13:11:01 +03:00
Alan Orth dad7a8765c
.github/workflows/python-app.yml: use Python 3.10
That's what I use for testing locally. Note that we need to quote
the version here because otherwise GitHub Actions will interpret it
as 3.1 due to how YAML works.
2022-01-30 13:06:51 +03:00
Alan Orth d126304534
README.md: update note about Python version 2022-01-30 13:05:36 +03:00
Alan Orth 38c2584863
.drone.yml: don't test on Python 3.7 anymore
Pandas 1.4.0 has a minimum Python requirement of 3.8.

See: https://pandas.pydata.org/docs/whatsnew/v1.4.0.html
2022-01-30 13:04:52 +03:00
Alan Orth e94a4539bf
pyproject.toml: bump Pandas to v1.4.0
As of Pandas v1.4.0 the minimum Python version is 3.8.

See: https://pandas.pydata.org/docs/whatsnew/v1.4.0.html
2022-01-30 13:03:56 +03:00
Alan Orth a589d39e38
poetry.lock: run poetry lock 2022-01-29 16:26:16 +03:00
Alan Orth d9e427a80e
pyproject.toml: don't install ipython
It always complains about running in a virtual environment anyways,
and I can use the one from the OS instead.
2022-01-29 16:25:58 +03:00
Alan Orth 8ee5e2e306
setup.py: denote that Python 3.10 works
I have been using Python 3.10 for months, and already added it to
the CI builds.
2022-01-29 16:08:01 +03:00
Alan Orth 490701f244
Run more CLI tests in CI
2021-12-24 14:47:25 +02:00
Alan Orth e1b270cf83
CHANGELOG.md: add note about dropping invalid AGROVOC values
2021-12-23 12:47:42 +02:00
Alan Orth b7efe2de40
data/test.csv: update invalid AGROVOC entry
Now that we can drop invalid AGROVOC values we should have a valid
value and an invalid value here. Depending on how the checker is
invoked we will either print a warning or drop the invalid value.
2021-12-23 12:45:38 +02:00
Alan Orth c43095139a
tests/test_check.py: add tests for dropping invalid AGROVOC 2021-12-23 12:44:32 +02:00
Alan Orth a7727b8431
Add support for dropping invalid AGROVOC terms
Requires --agrovoc-fields <field.name> to do the actual validation,
and -d to drop invalid ones.
2021-12-23 12:43:55 +02:00
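
A hypothetical invocation combining the two flags; the -i/-o input/output options are assumptions about the CLI:

    $ csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject -d
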
Alan Orth 7763a021c5
csv_metadata_quality/fix.py: sort imports with isort
2021-12-15 23:15:02 +02:00
Alan Orth 3c12ef3f66
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-12-15 23:11:44 +02:00
Alan Orth aee2438e94
poetry.lock: run poetry update 2021-12-15 23:10:27 +02:00
Alan Orth a351ba9706
CHANGELOG.md: add notes about ftfy 2021-12-15 22:09:01 +02:00
Alan Orth e4faf114dc
csv_metadata_quality/util.py: update for ftfy 6.0
The sequence_weirdness() heuristic is deprecated. Now we should use
is_bad().

See: https://ftfy.readthedocs.io/en/v6.0/heuristic.html
See: https://github.com/rspeer/python-ftfy/blob/master/CHANGELOG.md#version-60-april-2-2021
2021-12-15 21:58:07 +02:00
Alan Orth ff49a80432
csv_metadata_quality/fix.py: configure ftfy
Don't replace smart quotes in ftfy. If our text has them we should
keep them.
2021-12-15 21:51:51 +02:00
Alan Orth 8b15154285
pyproject.toml: use ftfy 6.0
Lots of improvements here! Improvements to heuristics and a new way
to configure which fixes get applied.

See: https://github.com/rspeer/python-ftfy/blob/master/CHANGELOG.md#version-60-april-2-2021
2021-12-15 21:48:56 +02:00
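
A sketch of the two ftfy 6.0 changes noted above, per its documented API: the is_bad() heuristic and a config that leaves smart quotes curled:

    from ftfy import TextFixerConfig, fix_text
    from ftfy.badness import is_bad

    text = "CIAT PublicaciÃ³n"
    if is_bad(text):  # replaces the deprecated sequence_weirdness()
        # uncurl_quotes=False keeps smart quotes instead of straightening them
        print(fix_text(text, config=TextFixerConfig(uncurl_quotes=False)))
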
Alan Orth 5854f8e865
CHANGELOG.md: add note about unnecessary Unicode 2021-12-15 13:56:31 +02:00
Alan Orth e7322efadd
csv_metadata_quality/app.py: move unnecessary Unicode fix
We actually want to do this after we try to fix mojibake with ftfy.
These "unnecessary" Unicode characters could actually help ftfy in
some cases because often times they indicate that some character
from another encoding was there before (like an accent, dash, or
smart quote).
2021-12-15 13:53:25 +02:00
Alan Orth 95015febbd
csv_metadata_quality/fix.py: fix thin spaces
Replace thin spaces with normal spaces. Sometimes I see these get
mishandled on Windows machines and they end up as "?" or so.
2021-12-09 23:22:53 +02:00
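
The thin-space fix in miniature:

    # U+2009 THIN SPACE often renders as "?" on misconfigured systems;
    # replace it with a plain space
    text = "10\u2009kg"
    print(text.replace("\u2009", " "))  # 10 kg
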
Alan Orth cef6c66b30
CHANGELOG.md: start next changes 2021-12-09 23:21:58 +02:00
Alan Orth 9905e183ea
Bump version to 0.6.0-dev 2021-12-09 23:21:30 +02:00
Alan Orth cc34db7ff8
Version 0.5.0
2021-12-08 15:29:46 +02:00
Alan Orth b79e07b814
CHANGELOG.md: Add note about countries without regions 2021-12-08 15:21:45 +02:00
Alan Orth 865b950c33
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-12-08 15:20:22 +02:00
Alan Orth 6f269ca6b1
poetry.lock: run poetry update 2021-12-08 15:19:49 +02:00
Alan Orth 120e8cf09f
tests/test_check.py: add checks for countries without regions 2021-12-08 15:18:50 +02:00
Alan Orth a4eb79f625
data/test.csv: add data for countries without regions check 2021-12-08 15:17:55 +02:00
Alan Orth ccc2a73456
Add check for countries without matching regions
If we have country "Kenya" we should have region "Eastern Africa"
according to the UN M.49 geolocation scheme.
2021-12-08 15:02:20 +02:00
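
A sketch of the lookup behind this check using country_converter; the classification name is an assumption:

    import country_converter as coco

    # Kenya should map to the UN M.49 region "Eastern Africa"
    print(coco.convert(names="Kenya", to="UNregion"))
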
Alan Orth ad33195ba3
README.md: adjust intro
Makes the badges not wrap and looks better in my opinion.
2021-12-08 11:36:34 +02:00
Alan Orth 72fe38972e
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-12-05 16:29:37 +02:00
Alan Orth 04232d0ede
poetry.lock: run poetry update 2021-12-05 16:29:09 +02:00
Alan Orth f5fa33bbc6
CHANGELOG.md: add title in citation note 2021-12-05 16:23:39 +02:00
Alan Orth 1b978159c1
data/test.csv: Add data for title in citation test 2021-12-05 16:23:06 +02:00
Alan Orth 4d5696c4cb
csv_metadata_quality/check.py: update title in citation check
Initialize the titles and citations before the for loop so we can
access them later. This makes it easier to check if the item
actually has a citation.
2021-12-05 16:21:44 +02:00
Alan Orth e02678cd7c
tests/test_check.py: add tests for title in citation 2021-12-05 16:01:11 +02:00
Alan Orth 01b4354a14
tests/test_check.py: fix comment 2021-12-05 15:58:25 +02:00
Alan Orth 3b40a68279
Add check for title in citation
This checks if the item title exists in the citation. If it is not
present it could just be missing, or could have minor differences
in the whitespace, accents, etc.
2021-12-05 15:52:42 +02:00
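
The naive form of this check as a sketch; as the commit notes, small differences defeat plain containment:

    title = "Progress Review and Work Planning Meeting"
    citation = "Orth, A. 2021. Progress review and work planning meeting. ILRI."

    # A case difference alone is enough to make naive containment fail
    if title not in citation:
        print("Title not found in citation")
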
Alan Orth 999cc65097
csv_metadata_quality/app.py: adjust mojibake check
If unsafe fixes (-u) are enabled then we don't need to do the check
first before actually fixing them. Doing the check first creates
extra output that needs to be reviewed by the user.
2021-12-05 15:18:35 +02:00
Alan Orth a7c3be280d
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-11-27 12:26:21 +02:00
Alan Orth 69f68e0a72
poetry.lock: Run poetry update 2021-11-27 12:25:40 +02:00
Alan Orth c941a90944 .drone.yml: Test on Python 3.10
2021-10-11 20:09:32 +03:00
Alan Orth c95261f522
CHANGELOG.md: Add note about fix.newlines
2021-10-08 14:37:12 +03:00
Alan Orth 787fa9e8d9
Add field name to fix.newlines output 2021-10-08 14:36:43 +03:00
Alan Orth 82261f7fe0
tests/test_check.py: Run black
2021-10-06 22:10:26 +03:00
Alan Orth 8a27fb2589
Add check for missing DOIs
Sometimes an editor includes a DOI in the citation field, but does
not add a standalone DOI field.
2021-10-06 21:25:39 +03:00
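
A sketch of the check: flag items whose citation contains a DOI-shaped string while the standalone DOI field is empty (the regex is the common DOI pattern, assumed here):

    import re

    citation = "Smith, J. 2021. Example. https://doi.org/10.1186/1743-422X-8-11"
    doi_field = ""

    if not doi_field and re.search(r"10\.\d{4,9}/\S+", citation):
        print("Citation contains a DOI, but the item has no DOI field")
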
Alan Orth 831ce979c3
CHANGELOG.md: Clarify regex fixes 2021-10-06 21:23:35 +03:00
Alan Orth 58ef62fbcd
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-10-06 21:20:35 +03:00
Alan Orth 8c59f57e76
poetry.lock: Run poetry update 2021-10-06 21:19:54 +03:00
Alan Orth 72dd3e7272
CHANGELOG.md: Add notes about regexes 2021-10-06 19:35:59 +03:00
Alan Orth 6ba16d5d4c
csv_metadata_quality/check.py: Fix duplicate checker
Fix the incorrect type field regex, and improve the title regex to
consider dcterms.title and dc.title (along with the DSpace language
variants like dc.title[en_US]), but ignore dc.title.alternative.

See: https://regex101.com/r/I4m06F/1
2021-10-06 19:32:40 +03:00
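
An approximation of the title regex described above (see the regex101 link for the real one):

    import re

    # dc.title / dcterms.title plus language variants, not dc.title.alternative
    title_re = re.compile(r"^(dc|dcterms)\.title(\[[A-Za-z_]+\])?$")
    for f in ["dc.title", "dc.title[en_US]", "dcterms.title", "dc.title.alternative"]:
        print(f, bool(title_re.match(f)))  # True, True, True, False
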
Alan Orth 81069259ba
CHANGELOG.md: Add note about bibliographicCitation
2021-10-06 16:16:51 +03:00
Alan Orth 54ab869297
csv_metadata_quality/experimental.py: Adjust citation match
We need to match both of these citation fields:

- dc.identifier.citation
- dcterms.bibliographicCitation
2021-10-06 16:13:10 +03:00
Alan Orth 22b359c8a8
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-09-27 14:15:01 +03:00
Alan Orth 3e06788d88
poetry.lock: Run poetry update 2021-09-27 14:11:21 +03:00
Alan Orth 3c41cc283f
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-09-06 21:04:05 +03:00
Alan Orth 5741e94571
poetry.lock: Run poetry update 2021-09-06 21:03:30 +03:00
Alan Orth 215d61c188
pyproject.toml: limit SQLAlchemy to < 1.4.23
SQLAlchemy gets pulled in by csvkit's agate-sql dependency and there
is currently an issue with Poetry's parsing of the SQLAlchemy 1.4.23
constraints. Temporarily explicitly install a version of SQLAlchemy
that works (can remove later once Poetry fixes this). Anyways, I am
not using any SQLAlchemy features that I know of.

See: https://github.com/python-poetry/poetry/issues/4402
2021-09-06 21:01:09 +03:00
Alan Orth 11ddde3327
data/test.csv: Update mojibake example
I was trying to find where I got this one and it seems to have been
the other way around. It doesn't matter here, only that I was curious.
2021-08-19 15:48:41 +03:00
Alan Orth a347878d43
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-08-12 21:49:36 +03:00
Alan Orth a89bc331f0
poetry.lock: Run poetry update
Lots of minor dependencies updates. All tests still passing with
pytest.
2021-08-12 21:47:46 +03:00
Alan Orth af3493c724
CITATION.cff: Remove YAML formatting
GitHub says it can't parse my CITATION.cff file. The example in the
docs shows version 1.2.0 also; I wonder if that's relevant.

See: https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files
2021-07-28 21:23:30 +03:00
Alan Orth 52644bf83e
Add CITATION.cff
Created with the cffinit tool:

https://citation-file-format.github.io/cff-initializer-javascript/
2021-07-28 21:11:11 +03:00
Alan Orth c8f5539d21
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-07-06 15:47:44 +03:00
Alan Orth 382d0d6aed
Run poetry update 2021-07-06 15:37:57 +03:00
Alan Orth b8f4be9ebb
pyproject.toml: Update pytest-clarity and black
These seem to have much newer versions that didn't get updated in
this project due to the version pinning selector I was using with
poetry.

In the case of pytest-clarity the previous version was 0.3.1 and
the version selector was a caret (^), which will never update the
left-most (major) number. Now they seem to be on 1.x.x so it will
be OK in the future.

In the case of black, they use weird numbering so it's anyone's
guess how this will work! Luckily it's only used for linting and
formatting.
2021-07-06 15:30:41 +03:00
Alan Orth 4e2eab68b0
Update requests-cache
Apparently we were stuck on an older version of requests-cache due
to the fact that we were using the caret, which will never update
the left-most (major) version. Upstream requests-cache is currently
version 0.6.4, and there seems to have been some changes to the API.
2021-07-06 15:24:39 +03:00
Alan Orth 55165cb4ce
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-06-14 12:52:47 +03:00
Alan Orth 93d3eabfba
poetry.lock: Run poetry update 2021-06-14 12:52:28 +03:00
Alan Orth a8fe623f4c
csv_metadata_quality/check.py: Remove unnecessary pass
LGTM warned that these pass statements are not necessary.

See: https://lgtm.com/rules/910088/
2021-04-20 08:20:13 +03:00
Alan Orth dbc0437d59
CHANGELOG.md: Add note about Python deps
2021-04-14 16:16:02 +03:00
Alan Orth 96ce1daa90
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-04-14 16:15:28 +03:00
Alan Orth 3adb52d7c0
poetry.lock: Run poetry update 2021-04-14 16:14:37 +03:00
Alan Orth f958d1879f
poetry.lock: Run poetry update
2021-04-02 16:19:16 +03:00
Alan Orth bd8943f36a
csv_metadata_quality/app.py: Don't crash if fields are missing
We don't need to crash if someone feeds us a CSV file that is
missing common DSpace fields like title, type, and subject.
2021-03-21 19:47:29 +02:00
Alan Orth 28f9026286
README.md: Minor edit
2021-03-19 16:26:31 +02:00
Alan Orth cfe09f7126
Add SPDX short license identifier to all Python files
See: https://spdx.github.io/spdx-spec/appendix-V-using-SPDX-short-identifiers-in-source-files/
2021-03-19 16:04:40 +02:00
Alan Orth 8eddb76aab
Bump version to 0.4.8-dev
2021-03-19 11:53:56 +02:00
Alan Orth a04dbc50db
Add notes about checking and fixing mojibake 2021-03-19 11:48:27 +02:00
Alan Orth 28335ed159
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-03-19 10:29:15 +02:00
Alan Orth 773a0a2695
poetry.lock: Run poetry update 2021-03-19 10:28:55 +02:00
Alan Orth 39a4b1a487
Add mojibake to data/test.csv and tests 2021-03-19 10:28:33 +02:00
Alan Orth 898bb412c3
Add checks and unsafe fixes for mojibake
This detects whether text has likely been encoded in one encoding
and decoded in another, perhaps multiple times. This often results
in display of "mojibake" characters.

For example, a file encoded in UTF-8 is opened as CP-1252 (Windows
Latin codepage) in Microsoft Excel, and saved again as UTF-8. You
will see strings like this in the resulting file:

    - CIAT PublicaÃ§ao
    - CIAT PublicaciÃ³n

The correct version of these in UTF-8 would be:

    - CIAT Publicaçao
    - CIAT Publicación

I use a code snippet from Martijn Pieters on StackOverflow to
detect whether a string is "weird" as determined by the excellent
"fixes text for you" (ftfy) Python library, then check if a weird
string encodes as CP-1252 or not. If so, I can try to fix it.

See: https://stackoverflow.com/questions/29071995/identify-garbage-unicode-string-using-python
2021-03-19 10:22:21 +02:00
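A minimal sketch of the detection logic described above, assuming a recent ftfy release (newer versions expose is_bad(); older ones used sequence_weirdness()):

    from ftfy import fix_text
    from ftfy.badness import is_bad

    def is_mojibake(field: str) -> bool:
        if not is_bad(field):
            # Nothing "weird" here, so probably not mojibake
            return False
        try:
            field.encode("cp1252")
        except UnicodeEncodeError:
            # Weird, but not encodable as CP-1252, so leave it alone
            return False
        # Weird *and* re-encodable as CP-1252: very likely fixable mojibake
        return True

    print(is_mojibake("CIAT PublicaciÃ³n"))  # True
    print(fix_text("CIAT PublicaciÃ³n"))     # CIAT Publicación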
Alan Orth e92ec5d371
README.md: Add note about duplicate checking
continuous-integration/drone/push Build is passing Details
2021-03-17 10:12:03 +02:00
Alan Orth f816e17fe7
Version 0.4.7
continuous-integration/drone/push Build is passing Details
2021-03-17 10:00:34 +02:00
Alan Orth 9061c7c79b
setup.py: Remove beta tag
I think this is only used by pypi.org?
2021-03-17 10:00:09 +02:00
Alan Orth 661d05b977
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-03-17 09:58:35 +02:00
Alan Orth 652b7ea98c
CHANGELOG.md: Add note about poetry dependencies 2021-03-17 09:58:02 +02:00
Alan Orth 65da6e9b05
poetry.lock: Run poetry update 2021-03-17 09:57:31 +02:00
Alan Orth a313b7527a
CHANGELOG.md: Add note about duplicate items 2021-03-17 09:55:07 +02:00
Alan Orth 51ee370697
data/test.csv: Add duplicate item 2021-03-17 09:54:14 +02:00
Alan Orth e8422bfa74
tests/test_check.py: Add test for duplicate items 2021-03-17 09:54:02 +02:00
Alan Orth 9f2dc0a0f5
Add support for detecting duplicate items
This uses the title, type, and date issued as a sort of "key" when
determining if an item already exists in the data set.
2021-03-17 09:53:07 +02:00
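A minimal sketch of that "key" idea, assuming pandas and hypothetical column names and values:

    import pandas as pd

    df = pd.DataFrame({
        "dcterms.title": ["Tropical forages", "Tropical forages"],
        "dcterms.type": ["Report", "Report"],
        "dcterms.issued": ["2021-03", "2021-03"],
    })
    # Combine title, type, and date issued into one key per item
    keys = df["dcterms.title"] + df["dcterms.type"] + df["dcterms.issued"]
    # A repeated key is a strong indicator of a duplicate item
    print(keys.duplicated().any())  # True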
Alan Orth 14010896a5
csv_metadata_quality/experimental.py: Move all imports to top of file
continuous-integration/drone/push Build is passing Details
PEP8 recommends keeping imports at the top of the file. Also, I had
to re-work the issn/isbn imports so they didn't conflict with the functions
in check.py (flake8 warned about them being redefined).

Imports sorted with isort.

See: https://www.python.org/dev/peps/pep-0008/#imports
2021-03-16 16:13:34 +02:00
Alan Orth ab3af2ec62
csv_metadata_quality/check.py: Reformat with black 2021-03-16 16:12:33 +02:00
Alan Orth 1aa2084230
CHANGELOG.md: Add note about checks 2021-03-16 16:11:24 +02:00
Alan Orth 330a7b7b9c
Don't unnecessarily rewrite DataFrames for checks
By using df[column] = df[column].apply(check...) we were re-writing
the DataFrame every time we returned from a check. We don't actually
need to return a value at all, as the point of checks is to print a
warning to the screen. In Python a "return" statement without a
value returns None.

I haven't measured the impact of this, but I assume it will mean we
are faster and use less memory.
2021-03-16 16:04:19 +02:00
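A runnable illustration of the difference, with a toy check standing in for the real ones (the names here are hypothetical):

    import pandas as pd

    def check_empty(value):
        # Checks only print warnings; they deliberately return None
        if value == "":
            print("Warning: empty value")

    df = pd.DataFrame({"dc.title": ["A title", ""]})

    # New pattern: run the check without reassigning, so the DataFrame
    # is never rewritten
    df["dc.title"].apply(check_empty)

    # Old pattern (avoid): this would overwrite the column with the
    # None values returned by the check
    # df["dc.title"] = df["dc.title"].apply(check_empty)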
Alan Orth 9a5e3fd6ef
README.md: Add TODO about detecting duplicates 2021-03-16 14:03:26 +02:00
Alan Orth ed084da08c
CHANGELOG.md: Add note about multi-value separators
continuous-integration/drone/push Build is passing Details
2021-03-14 21:04:19 +02:00
Alan Orth 10612cf891
Remove checks for invalid multi-value separators
Now that I no longer treat the fix for these as "unsafe" I don't
actually need to check for them—I can just fix them when I see them.
2021-03-14 21:01:21 +02:00
Alan Orth 3656e9f976
Update CI workflows to use DCTERMS instead of DC
continuous-integration/drone/push Build is passing Details
2021-03-14 15:52:51 +02:00
Alan Orth c9c277f8df
csv_metadata_quality/app.py: Update help text
continuous-integration/drone/push Build is passing Details
Use DCTERMS fields where possible.
2021-03-14 10:52:58 +02:00
Alan Orth fb35afd937
CHANGELOG.md: Add note about requests cache 2021-03-14 09:13:51 +02:00
Alan Orth 0e9176f0a6
csv_metadata_quality/check.py: requests cache
Allow overriding the directory for the requests cache. In the case
of csv-metadata-quality-web, which currently runs on Google's App
Engine, we can only write to /tmp.
2021-03-14 09:07:35 +02:00
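The override described above boils down to a one-line environment lookup; a sketch mirroring the change:

    import os

    # Fall back to the current working directory unless the caller points
    # us somewhere writable, for example REQUESTS_CACHE_DIR=/tmp
    REQUESTS_CACHE_DIR = os.environ.get("REQUESTS_CACHE_DIR", ".")
    print(f"{REQUESTS_CACHE_DIR}/agrovoc-response-cache")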
Alan Orth 1008acf35e
Always fix invalid multi-value separators
continuous-integration/drone/push Build is passing Details
This is no longer classified as "unsafe" as I have yet to see a
case where this was intentional, and it always causes issues when
you import the data in a DSpace repository.
2021-03-13 12:59:45 +02:00
Alan Orth f00a07e2cd
README.md: Reorganize unsafe functionality
continuous-integration/drone/push Build is passing Details
2021-03-13 11:56:52 +02:00
Alan Orth 46098861ed
poetry.lock: Run poetry update
continuous-integration/drone/push Build is passing Details
2021-03-11 22:45:32 +02:00
Alan Orth fa84cfa440
Bump version to 0.4.6-dev 2021-03-11 22:44:36 +02:00
Alan Orth 6cc1401f88
pyproject.toml: Minimum Python is technically 3.7.1
continuous-integration/drone/push Build is passing Details
See: https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.2.0.html
2021-03-11 13:41:58 +02:00
Alan Orth ad2cda8a41
README.md: Add note about SPDX license identifiers
continuous-integration/drone/push Build is passing Details
2021-03-11 12:21:34 +02:00
Alan Orth dc6920802e
.github/workflows/python-app.yml: Use Python 3.9
I now use this version in my development environment. Eventually I
should add a matrix of versions to use, but I don't know the GitHub
Actions syntax well enough yet.
2021-03-11 12:17:57 +02:00
Alan Orth 6ca449d8ed
README.md: Update note about Python 3.8 to 3.8+
Currently the lower bound on Python version support is 3.7 because
of Pandas 1.2.0 requiring it, but I use 3.9 on my development box.
2021-03-11 12:16:07 +02:00
Alan Orth 1554cfd5c9
Version 0.4.6 2021-03-11 12:14:54 +02:00
Alan Orth 00b8faad6d
CHANGELOG.md: Fix headers 2021-03-11 12:13:22 +02:00
Alan Orth b19d81abdd
.drone.yml: We need some stuff to build pyicu now
continuous-integration/drone/push Build is passing Details
2021-03-11 12:07:28 +02:00
Alan Orth a0ea829f5c
csv_metadata_quality/fix.py: Fixes should be green 2021-03-11 11:47:24 +02:00
Alan Orth 0089efa914
tests/test_check.py: Use dcterms.subject instead of dc.subject
Trying to move some old DC fields to DCTERMS.
2021-03-11 11:45:25 +02:00
Alan Orth 3dbe656f9f
Update requirements
continuous-integration/drone/push Build is failing Details
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-03-11 11:11:19 +02:00
Alan Orth 7ad821dcad
CHANGELOG.md: Add note about poetry dependencies 2021-03-11 11:10:27 +02:00
Alan Orth cd876c4fb3
poetry.lock: Run poetry update 2021-03-11 11:10:02 +02:00
Alan Orth d88ea56488
csv_metadata_quality/check.py: Move all imports to top of file
PEP8 recommends keeping imports at the top of the file. Also, I had
to re-work the issn/isbn imports so they didn't conflict with the functions
in check.py (flake8 warned about them being redefined).

Imports sorted with isort.

See: https://www.python.org/dev/peps/pep-0008/#imports
2021-03-11 10:52:20 +02:00
Alan Orth e0e3ca6c58
CHANGELOG.md: Add notes about DCTERMS in data/test.csv 2021-03-11 10:50:52 +02:00
Alan Orth abae8ca4fb
data/test.csv: Move some DC fields to DCTERMS
The original Dublin Core elements set was superseded by DCTERMS in
2008 and we have started using them in our DSpace repository so I
think it's good to update them in our test data. Old DC fields are
still checked and fixed in this tool, though.

It's worth noting that currently supported DSpace versions (4, 5,
and 6) all have a few fields like dc.title hard-coded internally, so
we can't migrate those to their DCTERMS counterparts just yet.
2021-03-11 10:49:05 +02:00
Alan Orth d7d4d4efca
CHANGELOG.md: Add note about SPDX license identifiers 2021-03-11 10:37:27 +02:00
Alan Orth 5318953150
tests/test_check.py: Add tests for licenses 2021-03-11 10:36:26 +02:00
Alan Orth 3b17914002
data/test.csv: Add invalid SPDX license
Now we are checking dcterms.license against the list of SPDX license
identifiers using https://pypi.org/project/spdx-license-list/.
2021-03-11 10:34:58 +02:00
Alan Orth 6e4b0e5c1b
Add validation of SPDX license identifiers
Currently this only checks the dcterms.license field and the result
will only be a warning.
2021-03-11 10:33:16 +02:00
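A minimal sketch of the warning-only check, with a deliberately abbreviated stand-in for the license set (the real list comes from the spdx-license-list package mentioned below):

    # Hypothetical, abbreviated stand-in for the SPDX license list
    SPDX_LICENSES = {"CC-BY-4.0", "CC-BY-SA-4.0", "GPL-3.0-only", "MIT"}

    def spdx_license_identifier(field: str) -> None:
        # Multi-value fields use the DSpace "||" separator
        for value in field.split("||"):
            if value not in SPDX_LICENSES:
                print(f"Non-SPDX license identifier: {value}")

    spdx_license_identifier("CC-BY-4.0||Creative Commons Attribution 4.0")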
Alan Orth b16fa9121f
pyproject.toml: Add csv-metadata-quality as a script
continuous-integration/drone/push Build is passing Details
For some reason I stopped having csv-metadata-quality available in
my poetry environment after install. It seems I need to add it as a
poetry tool script? I had already done this in setup.py years ago,
which works for regular python setup.py installs, but I hadn't needed
to do it in poetry during the year or more I've been using it, until
now.
2021-03-08 09:50:05 +02:00
Alan Orth 202bda862a
Bump version to 0.4.5
continuous-integration/drone/push Build is passing Details
2021-03-04 21:38:10 +02:00
Alan Orth 7479310ac0
setup.py: Bump version to 0.4.4
I forgot to increase this when I actually released version 0.4.4, so
I will do it in a separate commit now before I bump the version to
0.4.5.
2021-03-04 21:35:08 +02:00
Alan Orth 98a91bc9c2
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-03-04 21:33:33 +02:00
Alan Orth fc5bedcc5c
CHANGELOG.md: Add poetry update 2021-03-04 21:32:46 +02:00
Alan Orth 44d12d771a
poetry.lock: Run poetry update 2021-03-04 21:32:21 +02:00
Alan Orth 4a7000e975
README.md: Add more ideas to do 2021-03-04 21:26:53 +02:00
Alan Orth 27b2d81ca8
CHANGELOG.md: Add note about dcterms.issued
continuous-integration/drone/push Build is passing Details
2021-02-28 15:14:39 +02:00
Alan Orth 91ebd0f606
README.md: Update TODOs
A few of these date things have been addressed.
2021-02-28 15:13:36 +02:00
Alan Orth dd2cfae047
csv_metadata_quality/app.py: Match dcterms.issued for dates
We used to only check fields that had "date" in their name because
we were using DSpace's default dc.date.* fields. Now we are using
dcterms.issued so I will add that one as well.
2021-02-28 15:11:06 +02:00
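The change amounts to broadening the column-matching regex; a sketch using the pattern that appears later in the diff:

    import re

    for column in ["dc.date.issued", "dcterms.issued[en_US]", "dc.title"]:
        if re.match(r"^.*?(date|dcterms\.issued).*$", column):
            print(f"Checking dates in {column}")
    # Matches dc.date.issued and dcterms.issued[en_US], but not dc.title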
Alan Orth d76e72532a
Move unreleased changes to v0.4.4
continuous-integration/drone/push Build is passing Details
2021-02-21 13:25:22 +02:00
Alan Orth 13980d2dde
CHANGELOG.md: Add note about colored output 2021-02-21 13:12:26 +02:00
Alan Orth 9aaaa62461
Update requirements
continuous-integration/drone/push Build is passing Details
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-02-21 13:10:52 +02:00
Alan Orth a7fc5a246c
Colorize output
continuous-integration/drone/push Build is failing Details
Messages will be colorized:

- Red for errors
- Yellow for warnings or information
- Green for fixes
2021-02-21 13:01:25 +02:00
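A minimal sketch of the colour convention using colorama (the messages here are made up):

    from colorama import Fore, init

    init()  # enables ANSI colours on Windows as well

    print(f"{Fore.RED}Invalid ISSN: {Fore.RESET}2307-8235x")         # error
    print(f"{Fore.YELLOW}Suspicious character: {Fore.RESET}foreˆt")  # warning
    print(f"{Fore.GREEN}Fixed whitespace.{Fore.RESET}")              # fix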
Alan Orth 7fb8acb866
Add colorama for colored output
Red for errors, yellow for warnings or information, and green for
fixes.
2021-02-21 13:00:31 +02:00
Alan Orth 9f5d2c2c4f
poetry.lock: Run poetry update
continuous-integration/drone/push Build is passing Details
2021-02-15 15:13:12 +02:00
Alan Orth 202abf140c
CHANGELOG.md: Add note about poetry
continuous-integration/drone/push Build is passing Details
2021-02-04 21:48:12 +02:00
Alan Orth 0cd6d3dfe6
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running in CI:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-02-04 21:46:49 +02:00
Alan Orth a458beac55
poetry.lock: Run poetry update 2021-02-04 21:45:30 +02:00
Alan Orth e62ecb0a8f
CHANGELOG.md: Add note about new date format 2021-02-04 21:43:44 +02:00
Alan Orth de92f32ab6
csv_metadata_quality/check.py: More date formats
We should also allow ISO 8601 extended in combined date and time
format. DSpace does not have a problem with dates in this format
and I have found some metadata that uses this date format.

For example: 2020-08-31T11:04:56Z

See: https://en.wikipedia.org/wiki/ISO_8601
2021-02-04 21:39:14 +02:00
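The new format is accepted with one more strptime attempt; a sketch using the example from the message:

    from datetime import datetime

    # ISO 8601 extended, combined date and time with a Zulu offset
    parsed = datetime.strptime("2020-08-31T11:04:56Z", "%Y-%m-%dT%H:%M:%SZ")
    print(parsed)  # 2020-08-31 11:04:56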
Alan Orth dbbbc0944a
README.md: Add handle to citation
continuous-integration/drone/push Build is passing Details
2021-01-27 10:33:37 +02:00
Alan Orth d17bf3033c
README.md: Add citation 2021-01-27 10:32:26 +02:00
Alan Orth 2ec52f1b73
README.md: Update description
continuous-integration/drone/push Build is passing Details
2021-01-26 15:43:41 +02:00
Alan Orth aa1abf15a7
README.md: Adjust title 2021-01-26 15:35:21 +02:00
Alan Orth cbf94490f2
Version 0.4.3 2021-01-26 15:22:40 +02:00
Alan Orth f3d0d5ef07
setup.py: Remove Python 3.6
I actually removed Python 3.6 support a few weeks ago after updating
to Pandas 1.2.0, but forgot to update this.
2021-01-26 15:22:08 +02:00
Alan Orth 4b7b99c94c
CHANGELOG.md: Add note about multi-value separators 2021-01-26 15:20:22 +02:00
Alan Orth df670e81b9
README.md: Use badge from my Drone CI
continuous-integration/drone/push Build is passing Details
I'm not using SourceHut anymore.
2021-01-26 14:38:50 +02:00
Alan Orth ae357d8c6c
Revert "Update requirements"
This reverts commit ca80340f7a.

Nope, we still need the --without-hashes because this still fails
on Python 3.7, but not 3.8 or 3.9. From looking around it seems
that nobody can agree whether poetry should handle this, pip should
handle it, or upstream projects should pin their dependencies.
2021-01-26 14:15:31 +02:00
Alan Orth ca80340f7a
Update requirements
continuous-integration/drone/push Build is failing Details
Generated with poetry export:

    $ poetry export -f requirements.txt > requirements.txt
    $ poetry export --dev -f requirements.txt > requirements-dev.txt

Trying to see if we no longer need --without-hashes since we don't
support Python 3.6 anymore.
2021-01-26 11:46:05 +02:00
Alan Orth cc1743b86d
Remove .build.yml
I will just use GitHub Actions and Drone.
2021-01-26 11:41:30 +02:00
Alan Orth bcb9885c6b
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running on Python 3.6 in Travis:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-01-26 10:36:48 +02:00
Alan Orth b484b75178
poetry.lock: Run poetry update 2021-01-26 10:36:04 +02:00
Alan Orth d3880a9dfa
Remove Python 3.6 support
continuous-integration/drone/push Build is passing Details
Pandas 1.2.0 apparently requires Python 3.7.1+.
2021-01-03 15:51:53 +02:00
Alan Orth 7edb8b19d7
tests/test_check.py: Reformat with black 2021-01-03 15:50:21 +02:00
Alan Orth a6709c7f82
Update requirements
continuous-integration/drone/push Build is failing Details
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running on Python 3.6 in Travis:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2021-01-03 15:42:00 +02:00
Alan Orth d489ea4609
poetry.lock: Run poetry update 2021-01-03 15:41:08 +02:00
Alan Orth 96634cbb67
pytest.ini: Change --strict to --strict-markers
This is deprecated since pytest 6.2.0.

See: https://docs.pytest.org/en/stable/deprecations.html#the-strict-command-line-option
2021-01-03 15:40:14 +02:00
Alan Orth 29e67a0887
Add tests for unnecessary multi-value separators 2021-01-03 15:37:18 +02:00
Alan Orth 32cea2055f
data/test.csv: Add unnecessary multi-value separator 2021-01-03 15:33:04 +02:00
Alan Orth 0dc66c5c4e
Expand check/fix for multi-value separators
I just came across some metadata that had unnecessary multi-value
separators at the end of a field, causing a blank value to be used.

For example: "Kenya||Tanzania||"
2021-01-03 15:30:03 +02:00
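A minimal sketch of the expanded fix, assuming the standard DSpace "||" separator:

    def fix_separators(field: str) -> str:
        # Dropping empty values removes trailing (and doubled) separators
        values = [value for value in field.split("||") if value != ""]
        return "||".join(values)

    print(fix_separators("Kenya||Tanzania||"))  # Kenya||Tanzania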
Alan Orth c26ad83534
.github: Test CLI invocation 2020-12-14 23:47:09 +02:00
Alan Orth 72ca9d99bf
setup.py: Add Python 3.9
[SKIP CI]
2020-12-14 23:44:35 +02:00
Alan Orth ae33a9b793
Add .drone.yml 2020-12-14 23:42:23 +02:00
Alan Orth fc0367bfc8
README.md: Update note about Python version 2020-12-08 10:52:24 +02:00
Alan Orth e33b285034
README.md: Add GitHub Actions badge 2020-12-08 10:48:31 +02:00
Alan Orth 349fca03b8
.github/workflows/python-app.yml: Rename
This name is displayed in the badge so it should be something more
relevant.
2020-12-08 10:46:39 +02:00
Alan Orth 52d8904870
Remove .travis.yml
They changed their free tier and I might as well use GitHub Actions
for ILRI stuff anyways.
2020-12-08 10:41:36 +02:00
Alan Orth 971c69e535
Create python-app.yml
Try GitHub Actions for Python 3.8 using GitHub's Python example.
2020-12-08 10:38:52 +02:00
Alan Orth f8cc233e25
.travis.yml: Use Amazon Graviton2 ARM environment
These are the new hotness and should have faster build times.

See: https://blog.travis-ci.com/2020-09-11-arm-on-aws
2020-12-06 10:49:03 +02:00
Alan Orth aa7b7a9592
Update requirements
Generated with poetry export:

    $ poetry export --without-hashes -f requirements.txt > requirements.txt
    $ poetry export --without-hashes --dev -f requirements.txt > requirements-dev.txt

I am trying `--without-hashes` to work around an error on pip install
when running on Python 3.6 in Travis:

    ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==.
2020-11-03 07:42:45 +02:00
Alan Orth 57b455bde7
poetry.lock: Run poetry update 2020-11-03 07:40:56 +02:00
Alan Orth 23b95fa368
.travis.yml: Use Ubuntu 20.04 "Focal" environment 2020-10-29 00:14:54 +03:00
Alan Orth 6985f76aa3
.travis.yml: Bump Python versions
Test Python 3.9 now that it was released, and allow tests to fail
on nightly builds.
2020-10-29 00:14:36 +03:00
Alan Orth 98a6a19e12
Update requirements-dev.txt
Generated with poetry export:

    $ poetry export --dev -f requirements.txt > requirements-dev.txt
2020-10-06 17:48:46 +03:00
Alan Orth f4914c414f
Only install ipython on Python 3.7+ 2020-10-06 17:48:16 +03:00
Alan Orth d352fe8017
Update requirements
Generated with poetry export:

    $ poetry export -f requirements.txt > requirements.txt
    $ poetry export --dev -f requirements.txt > requirements-dev.txt
2020-10-06 17:21:33 +03:00
Alan Orth f13c360084
Update poetry package dependencies 2020-10-06 17:20:16 +03:00
Alan Orth 7cfd4c0b59
csv_metadata_quality: Move scoped imports to global
According to PEP 8 we should avoid scoped imports unless there is a
good reason. Here there are two cases where we do (issn and isbn),
but I will move the others to the global scope.
2020-10-06 17:11:39 +03:00
Alan Orth 826509ddcf
poetry.lock: Run poetry update
List of updated modules:

  - Updating numpy (1.19.1 -> 1.19.2)
  - Updating pygments (2.6.1 -> 2.7.1)
  - Updating pandas (1.1.1 -> 1.1.2)

All tests still pass according to pytest.
2020-09-26 12:18:23 +03:00
Alan Orth 22b5c0f7a1
CHANGELOG.md: Add note about dependencies update 2020-09-08 15:04:40 +03:00
Alan Orth 774e274b32
poetry.lock: Run poetry update
Update dependencies to latest version:

  - Updating attrs (19.3.0 -> 20.2.0)
  - Updating more-itertools (8.4.0 -> 8.5.0)
  - Updating openpyxl (3.0.4 -> 3.0.5)
  - Updating parso (0.7.0 -> 0.7.1)
  - Updating sqlalchemy (1.3.18 -> 1.3.19)
  - Updating urllib3 (1.25.9 -> 1.25.10)
  - Updating agate-dbf (0.2.1 -> 0.2.2)
  - Updating agate-sql (0.5.4 -> 0.5.5)
  - Updating jedi (0.17.1 -> 0.17.2)
  - Updating numpy (1.19.0 -> 1.19.1)
  - Updating prompt-toolkit (3.0.5 -> 3.0.7)
  - Updating regex (2020.6.8 -> 2020.7.14)
  - Updating traitlets (4.3.3 -> 5.0.4)
  - Updating ipython (7.16.1 -> 7.18.1)
  - Updating pandas (1.0.5 -> 1.1.1)
  - Updating python-stdnum (1.13 -> 1.14)

All tests still pass according to pytest.
2020-09-08 15:04:00 +03:00
Alan Orth db474a802f
README.md: Use badge from travis-ci.com 2020-08-04 11:12:28 +03:00
Alan Orth e241f8461b
CHANGELOG.md: Add notes 2020-07-06 14:10:46 +03:00
Alan Orth 431e6331c8
csv_metadata_quality/check.py: Format with black 2020-07-06 14:10:19 +03:00
29 changed files with 11407 additions and 1665 deletions

.build.yml Deleted file

@ -1,15 +0,0 @@
image: archlinux
packages:
- python-poetry
sources:
- https://git.sr.ht/~alanorth/csv-metadata-quality
tasks:
- setup: |
cd csv-metadata-quality
poetry install
- pytest: |
cd csv-metadata-quality
poetry run pytest
- testcli: |
cd csv-metadata-quality
poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -e -u --agrovoc-fields dc.subject,cg.coverage.country

.drone.yml Normal file

@ -0,0 +1,91 @@
---
kind: pipeline
type: docker
name: python311
steps:
- name: test
image: python:3.11-slim
commands:
- id
- python -V
- apt update && apt install -y gcc g++ libicu-dev pkg-config git
- python -m pip install poetry
- poetry install
- poetry run pytest
# Basic test
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv
# Basic test with unsafe fixes
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -u
# Geography test
- poetry run csv-metadata-quality -i data/test-geography.csv -o /tmp/test.csv
# Geography test with unsafe fixes
- poetry run csv-metadata-quality -i data/test-geography.csv -o /tmp/test.csv -u
# Test with experimental checks
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -e
# Test with AGROVOC validation
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject
# Test with AGROVOC validation (and dropping invalid)
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject -d
---
kind: pipeline
type: docker
name: python310
steps:
- name: test
image: python:3.10-slim
commands:
- id
- python -V
- apt update && apt install -y gcc g++ libicu-dev pkg-config git
- python -m pip install poetry
- poetry install
- poetry run pytest
# Basic test
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv
# Basic test with unsafe fixes
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -u
# Geography test
- poetry run csv-metadata-quality -i data/test-geography.csv -o /tmp/test.csv
# Geography test with unsafe fixes
- poetry run csv-metadata-quality -i data/test-geography.csv -o /tmp/test.csv -u
# Test with experimental checks
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -e
# Test with AGROVOC validation
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject
# Test with AGROVOC validation (and dropping invalid)
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject -d
---
kind: pipeline
type: docker
name: python39
steps:
- name: test
image: python:3.9-slim
commands:
- id
- python -V
- apt update && apt install -y gcc g++ libicu-dev pkg-config git
- python -m pip install poetry
- poetry install
- poetry run pytest
# Basic test
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv
# Basic test with unsafe fixes
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -u
# Geography test
- poetry run csv-metadata-quality -i data/test-geography.csv -o /tmp/test.csv
# Geography test with unsafe fixes
- poetry run csv-metadata-quality -i data/test-geography.csv -o /tmp/test.csv -u
# Test with experimental checks
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -e
# Test with AGROVOC validation
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject
# Test with AGROVOC validation (and dropping invalid)
- poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject -d
# vim: ts=2 sw=2 et

.github/workflows/python-app.yml Normal file

@ -0,0 +1,45 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Build and Test
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- name: Install poetry
run: pipx install poetry
- uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'poetry'
- run: poetry install
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
poetry run flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
poetry run flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: poetry run pytest
- name: Test CLI
run: |
# Basic test
poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv
# Test with unsafe fixes
poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -u
# Test with experimental checks
poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv -e
# Test with AGROVOC validation
poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject
# Test with AGROVOC validation (and dropping invalid)
poetry run csv-metadata-quality -i data/test.csv -o /tmp/test.csv --agrovoc-fields dcterms.subject -d

.travis.yml Deleted file

@ -1,16 +0,0 @@
dist: bionic
language: python
python:
- "3.6"
- "3.7"
- "3.8"
- "3.8-dev" # 3.8 development branch
jobs:
allow_failures:
- python: "3.8-dev"
install:
- "pip install -r requirements.txt"
- "pip install -r requirements-dev.txt"
script: pytest
# vim: ts=2 sw=2 et

CHANGELOG.md

@ -4,6 +4,143 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Unreleased
### Added
- Ability to normalize DOIs to https://doi.org URI format
### Fixed
- Fixed regex so we don't run the invalid multi-value separator fix on
`dcterms.bibliographicCitation` fields
- Fixed regex so we run the comma space fix on `dcterms.bibliographicCitation`
fields
- Don't crash the country/region checker/fixer when a title field is missing
### Changed
- Don't run newline fix on description fields
- Install requests-cache in main run() function instead of check.agrovoc() function so we only incur the overhead once
- Use py3langid instead of langid, see: [How to make language detection with langid.py faster](https://adrien.barbaresi.eu/blog/language-detection-langid-py-faster.html)
### Updated
- Python dependencies, including Pandas 2.0.0 and [Arrow-backed dtypes](https://datapythonista.me/blog/pandas-20-and-the-arrow-revolution-part-i)
- SPDX license list
## [0.6.1] - 2023-02-23
### Fixed
- Missing region check should ignore subregion field, if it exists
### Changed
- Use SPDX license data from SPDX themselves instead of spdx-license-list
because it is deprecated and outdated
- Require Python 3.9+
- Don't run `fix.separators()` on title or abstract fields
- Don't run whitespace or newline fixes on abstract fields
- Ignore some common non-SPDX licenses
- Ignore `__description` suffix in filenames meant for SAFBuilder when checking
for uncommon file extensions
### Updated
- Python dependencies
## [0.6.0] - 2022-09-02
### Changed
- Perform fix for "unnecessary" Unicode characters after we try to fix encoding
issues with ftfy
- ftfy heuristics to use `is_bad()` instead of `sequence_weirdness()`
- ftfy `fix_text()` to *not* change “smart quotes” to "ASCII quotes"
### Updated
- Python dependencies
- Metadata field exclude logic
### Added
- Ability to drop invalid AGROVOC values with `-d` when checking AGROVOC values
with `-a <field.name>`
- Ability to add missing UN M.49 regions when both country and region columns
are present. Enable with `-u` (unsafe fixes) for now.
### Removed
- Support for reading Excel files (both `.xls` and `.xlsx`) as it was completely
untested
## [0.5.0] - 2021-12-08
### Added
- Ability to check for, and fix, "mojibake" characters using [ftfy](https://github.com/LuminosoInsight/python-ftfy)
- Ability to check if the item's title exists in the citation
- Ability to check if an item has countries, but no matching regions (only
suggests missing regions if there is a region field in the CSV)
### Updated
- Python dependencies
### Fixed
- Regular expression to match all citation fields (dc.identifier.citation as
well as dcterms.bibliographicCitation) in `experimental.correct_language()`
- Regular expression to match dc.title and dcterms.title, but
ignore dc.title.alternative `check.duplicate_items()`
- Missing field name in `fix.newlines()` output
## [0.4.7] - 2021-03-17
### Changed
- Fixing invalid multi-value separators like `|` and `|||` is no longer
  classified as "unsafe" as I have yet to see a case where this was intentional
- Not user visible, but now checks only print a warning to the screen instead
of returning a value and re-writing the DataFrame, which should be faster and
use less memory
### Added
- Configurable directory for AGROVOC requests cache (to allow running the web
version from Google App Engine where we can only write to /tmp)
- Ability to check for duplicate items in the data set (uses a combination of
the title, type, and date issued to determine uniqueness)
### Removed
- Checks for invalid and unnecessary multi-value separators because now I fix
them whenever I see them, so there is no need to have checks for them
### Updated
- Run `poetry update` to update project dependencies
## [0.4.6] - 2021-03-11
### Added
- Validation of dcterms.license field against SPDX license identifiers
### Changed
- Use DCTERMS fields where possible in `data/test.csv`
### Updated
- Run `poetry update` to update project dependencies
### Fixed
- Output for all fixes should be green, because it is good
## [0.4.5] - 2021-03-04
### Added
- Check dates in dcterms.issued field as well, not just fields that have the
word "date" in them
### Updated
- Run `poetry update` to update project dependencies
## [0.4.4] - 2021-02-21
### Added
- Accept dates formatted in ISO 8601 extended with combined date and time, for
example: 2020-08-31T11:04:56Z
- Colorized output: red for errors, yellow for warnings and information, green
for changes
### Updated
- Run `poetry update` to update project dependencies
## [0.4.3] - 2021-01-26
### Changed
- Reformat with black
- Requires Python 3.7+ for pandas 1.2.0
### Updated
- Run `poetry update`
- Expand check/fix for multi-value separators to include metadata with invalid
separators at the end, for example "Kenya||Tanzania||"
## [0.4.2] - 2020-07-06
### Changed
- Add field name to the output for more fixes and checks to help identify where

CITATION.cff Normal file

@ -0,0 +1,19 @@
cff-version: "1.1.0"
abstract: "A simple but opinionated metadata quality checker and fixer designed to work with CSVs in the DSpace ecosystem."
authors:
-
affiliation: "International Livestock Research Institute"
family-names: Orth
given-names: "Alan S."
orcid: "https://orcid.org/0000-0002-1735-7458"
date-released: 2019-07-26
doi: "10568/110997"
keywords:
- dspace
- "dublin-core"
- csv
- metadata
license: "GPL-3.0-only"
message: "If you use this software, please cite it using these metadata."
repository-code: "https://github.com/ilri/csv-metadata-quality"
title: "DSpace CSV Metadata Quality Checker"

MANIFEST.in Normal file

@ -0,0 +1 @@
include csv_metadata_quality/data/licenses.json

README.md

@ -1,7 +1,18 @@
# CSV Metadata Quality [![Build Status](https://travis-ci.org/ilri/csv-metadata-quality.svg?branch=master)](https://travis-ci.org/ilri/csv-metadata-quality) [![builds.sr.ht status](https://builds.sr.ht/~alanorth/csv-metadata-quality.svg)](https://builds.sr.ht/~alanorth/csv-metadata-quality?)
A simple, but opinionated metadata quality checker and fixer designed to work with CSVs in the DSpace ecosystem (though it could theoretically work on any CSV that uses Dublin Core fields as columns). The implementation is essentially a pipeline of checks and fixes that begins with splitting multi-value fields on the standard DSpace "||" separator, trimming leading/trailing whitespace, and then proceeding to more specialized cases like ISSNs, ISBNs, languages, etc.
<h1 align="center">DSpace CSV Metadata Quality Checker</h1>
Requires Python 3.8 or greater. CSV and Excel support comes from the [Pandas](https://pandas.pydata.org/) library, though your mileage may vary with Excel because this is much less tested.
<p align="center">
<a href="https://ci.mjanja.ch/alanorth/csv-metadata-quality"><img alt="Build Status" src="https://ci.mjanja.ch/api/badges/alanorth/csv-metadata-quality/status.svg"></a>
<a href="https://github.com/ilri/csv-metadata-quality/actions"><img alt="Build and Test" src="https://github.com/ilri/csv-metadata-quality/workflows/Build%20and%20Test/badge.svg"></a>
<a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
</p>
A simple, but opinionated metadata quality checker and fixer designed to work with CSVs in the DSpace ecosystem (though it could theoretically work on any CSV that uses Dublin Core fields as columns). The implementation is essentially a pipeline of checks and fixes that begins with splitting multi-value fields on the standard DSpace "||" separator, trimming leading/trailing whitespace, and then proceeding to more specialized cases like ISSNs, ISBNs, languages, unnecessary Unicode, AGROVOC terms, etc.
Requires Python 3.9 or greater. CSV support comes from the [Pandas](https://pandas.pydata.org/) library.
If you use the DSpace CSV metadata quality checker please cite:
*Orth, A. 2019. DSpace CSV metadata quality checker. Nairobi, Kenya: ILRI. https://hdl.handle.net/10568/110997.*
## Functionality
@ -9,13 +20,18 @@ Requires Python 3.8 or greater. CSV and Excel support comes from the [Pandas](ht
- Validate languages against ISO 639-1 (alpha2) and ISO 639-3 (alpha3)
- Experimental validation of titles and abstracts against item's Dublin Core language field
- Validate subjects against the AGROVOC REST API (see the `--agrovoc-fields` option)
- Validation of licenses against the list of [SPDX license identifiers](https://spdx.org/licenses)
- Fix leading, trailing, and excessive (ie, more than one) whitespace
- Fix invalid multi-value separators (`|`) using `--unsafe-fixes`
- Fix invalid and unnecessary multi-value separators (`|`)
- Fix problematic newlines (line feeds) using `--unsafe-fixes`
- Perform [Unicode normalization](https://withblue.ink/2019/03/11/why-you-need-to-normalize-unicode-strings.html) on strings using `--unsafe-fixes`
- Remove unnecessary Unicode like [non-breaking spaces](https://en.wikipedia.org/wiki/Non-breaking_space), [replacement characters](https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character), etc
- Check for "suspicious" characters that indicate encoding or copy/paste issues, for example "foreˆt" should be "forêt"
- Check for "mojibake" characters (and attempt to fix with `--unsafe-fixes`)
- Check for countries with missing regions (and attempt to fix with `--unsafe-fixes`)
- Remove duplicate metadata values
- Perform [Unicode normalization](https://withblue.ink/2019/03/11/why-you-need-to-normalize-unicode-strings.html) on strings using `--unsafe-fixes`
- Check for duplicate items, using the title, type, and date issued as an indicator
- [Normalize DOIs](https://www.crossref.org/documentation/member-setup/constructing-your-dois/) to https://doi.org URI format
## Installation
The easiest way to install CSV Metadata Quality is with [poetry](https://python-poetry.org):
@ -50,11 +66,13 @@ To validate and clean a CSV file you must specify input and output files using t
$ csv-metadata-quality -i data/test.csv -o /tmp/test.csv
```
## Unsafe Fixes
You can enable several "unsafe" fixes with the `--unsafe-fixes` option. Currently this will attempt to fix invalid multi-value separators and remove newlines.
## Invalid Multi-Value Separators
While it is *theoretically* possible for a single `|` character to be used legitimately in a metadata value, in my experience it is always a typo. For example, if a user mistakenly writes `Kenya|Tanzania` when attempting to indicate two countries, the result will be one metadata value with the literal text `Kenya|Tanzania`. This utility will correct the invalid multi-value separator so that there are two metadata values, ie `Kenya||Tanzania`.
### Invalid Multi-Value Separators
This is considered "unsafe" because it is *theoretically* possible for a single `|` character to be used legitimately in a metadata value, though in my experience it is always a typo. For example, if a user mistakenly writes `Kenya|Tanzania` when attempting to indicate two countries, the result will be one metadata value with the literal text `Kenya|Tanzania`. The `--unsafe-fixes` option will correct the invalid multi-value separator so that there are two metadata values, ie `Kenya||Tanzania`.
This will also remove unnecessary trailing multi-value separators, for example `Kenya||Tanzania||`.
## Unsafe Fixes
You can enable several "unsafe" fixes with the `--unsafe-fixes` option. Currently this will remove newlines, perform Unicode normalization, attempt to fix "mojibake" characters, and add missing UN M.49 regions.
### Newlines
This is considered "unsafe" because some systems give special importance to vertical space and render it properly. DSpace does not support rendering newlines in its XMLUI and has, at times, suffered from parsing errors that cause the import process to fail if an input file had newlines. The `--unsafe-fixes` option strips Unix line feeds (U+000A).
@ -67,6 +85,17 @@ This is considered "unsafe" because some systems give special importance to vert
Read more about [Unicode normalization](https://withblue.ink/2019/03/11/why-you-need-to-normalize-unicode-strings.html).
### Encoding Issues aka "Mojibake"
[Mojibake](https://en.wikipedia.org/wiki/Mojibake) is a phenomenon that occurs when text is decoded using an unintended character encoding. This usually presents itself in the form of strange, garbled characters in the text. Enabling "unsafe" fixes will attempt to correct these, for example:
- CIAT PublicaÃ§ao → CIAT Publicaçao
- CIAT PublicaciÃ³n → CIAT Publicación
Pay special attention to the output of the script as well as the resulting file to make sure no new issues have been introduced. The ideal way to solve these issues is to avoid them in the first place. See [this guide about opening CSVs in UTF-8 format in Excel](https://www.itg.ias.edu/content/how-import-csv-file-uses-utf-8-character-encoding-0).
### Countries With Missing Regions
When an input file has both country and region columns we can check to see if the ISO 3166 country names have matching UN M.49 regions and add them when they are missing.
## AGROVOC Validation
You can enable validation of metadata values in certain fields against the AGROVOC REST API with the `--agrovoc-fields` option. For example, in addition to agricultural subjects, many countries and regions are also present in AGROVOC. Enable this validation by specifying a comma-separated list of fields:
@ -97,11 +126,17 @@ This currently uses the [Python langid](https://github.com/saffsd/langid.py) lib
- Better logging, for example with INFO, WARN, and ERR levels
- Verbose, debug, or quiet options
- Warn if an author is shorter than 3 characters?
- Validate dc.rights field against SPDX? Perhaps with an option like `-m spdx` to enable the spdx module?
- Validate DOIs? Normalize to https://doi.org format? Or use just the DOI part: 10.1016/j.worlddev.2010.06.006
- Warn if two items use the same file in `filename` column
- Add an option to drop invalid AGROVOC subjects?
- Add tests for application invocation, ie `tests/test_app.py`?
- Validate ISSNs or journal titles against CrossRef API?
- Add configurable field validation, like specify a field name and a validation file?
- Perhaps like --validate=field.name,filename
- Add some row-based item sanity checks and fixes:
- Warn if item is Open Access, but missing a filename or URL
- Warn if item is Open Access, but missing a license
- Warn if item has an ISSN but no journal title
- Update journal titles from ISSN
- Migrate from Pandas to Polars
## License
This work is licensed under the [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html).

csv_metadata_quality/__main__.py

@ -1,3 +1,5 @@
# SPDX-License-Identifier: GPL-3.0-only
from sys import argv
from csv_metadata_quality import app

csv_metadata_quality/app.py

@ -1,9 +1,15 @@
# SPDX-License-Identifier: GPL-3.0-only
import argparse
import os
import re
import signal
import sys
from datetime import timedelta
import pandas as pd
import requests_cache
from colorama import Fore
import csv_metadata_quality.check as check
import csv_metadata_quality.experimental as experimental
@ -16,7 +22,13 @@ def parse_args(argv):
parser.add_argument(
"--agrovoc-fields",
"-a",
help="Comma-separated list of fields to validate against AGROVOC, for example: dc.subject,cg.coverage.country",
help="Comma-separated list of fields to validate against AGROVOC, for example: dcterms.subject,cg.coverage.country",
)
parser.add_argument(
"--drop-invalid-agrovoc",
"-d",
help="After validating metadata values against AGROVOC, drop invalid values.",
action="store_true",
)
parser.add_argument(
"--experimental-checks",
@ -27,7 +39,7 @@ def parse_args(argv):
parser.add_argument(
"--input-file",
"-i",
help="Path to input file. Can be UTF-8 CSV or Excel XLSX.",
help="Path to input file. Must be a UTF-8 CSV.",
required=True,
type=argparse.FileType("r", encoding="UTF-8"),
)
@ -47,7 +59,7 @@ def parse_args(argv):
parser.add_argument(
"--exclude-fields",
"-x",
help="Comma-separated list of fields to skip, for example: dc.contributor.author,dc.identifier.citation",
help="Comma-separated list of fields to skip, for example: dc.contributor.author,dcterms.bibliographicCitation",
)
args = parser.parse_args()
@ -65,33 +77,50 @@ def run(argv):
signal.signal(signal.SIGINT, signal_handler)
# Read all fields as strings so dates don't get converted from 1998 to 1998.0
df = pd.read_csv(args.input_file, dtype=str)
df = pd.read_csv(args.input_file, dtype_backend="pyarrow", dtype="str")
# Check if the user requested to skip any fields
if args.exclude_fields:
# Split the list of excluded fields on ',' into a list. Note that the
# user should be careful to not include spaces here.
exclude = args.exclude_fields.split(",")
else:
exclude = []
# enable transparent request cache with thirty days expiry
expire_after = timedelta(days=30)
# Allow overriding the location of the requests cache, just in case we are
# running in an environment where we can't write to the current working
# directory (for example from csv-metadata-quality-web).
REQUESTS_CACHE_DIR = os.environ.get("REQUESTS_CACHE_DIR", ".")
requests_cache.install_cache(
f"{REQUESTS_CACHE_DIR}/agrovoc-response-cache", expire_after=expire_after
)
# prune old cache entries
requests_cache.delete()
for column in df.columns:
# Check if the user requested to skip any fields
if args.exclude_fields:
skip = False
# Split the list of excludes on ',' so we can test exact matches
# rather than fuzzy matches with regexes or "if word in string"
for exclude in args.exclude_fields.split(","):
if column == exclude and skip is False:
skip = True
if skip:
print(f"Skipping {column}")
if column in exclude:
print(f"{Fore.YELLOW}Skipping {Fore.RESET}{column}")
continue
continue
# Fix: whitespace
df[column] = df[column].apply(fix.whitespace, field_name=column)
# Fix: newlines
if args.unsafe_fixes:
df[column] = df[column].apply(fix.newlines)
# Skip whitespace and newline fixes on abstracts and descriptions
# because there are too many with legitimate multi-line metadata.
match = re.match(r"^.*?(abstract|description).*$", column)
if match is None:
# Fix: whitespace
df[column] = df[column].apply(fix.whitespace, field_name=column)
# Fix: newlines
df[column] = df[column].apply(fix.newlines, field_name=column)
# Fix: missing space after comma. Only run on author and citation
# fields for now, as this problem is mostly an issue in names.
if args.unsafe_fixes:
match = re.match(r"^.*?(author|citation).*$", column)
match = re.match(r"^.*?(author|[Cc]itation).*$", column)
if match is not None:
df[column] = df[column].apply(fix.comma_space, field_name=column)
@ -100,17 +129,28 @@ def run(argv):
if args.unsafe_fixes:
df[column] = df[column].apply(fix.normalize_unicode, field_name=column)
# Check: suspicious characters
df[column].apply(check.suspicious_characters, field_name=column)
# Fix: mojibake. If unsafe fixes are not enabled then we only check.
if args.unsafe_fixes:
df[column] = df[column].apply(fix.mojibake, field_name=column)
else:
df[column].apply(check.mojibake, field_name=column)
# Fix: unnecessary Unicode
df[column] = df[column].apply(fix.unnecessary_unicode)
# Check: invalid multi-value separator
df[column] = df[column].apply(check.separators, field_name=column)
# Fix: normalize DOIs
match = re.match(r"^.*?identifier\.doi.*$", column)
if match is not None:
df[column] = df[column].apply(fix.normalize_dois)
# Check: suspicious characters
df[column] = df[column].apply(check.suspicious_characters, field_name=column)
# Fix: invalid multi-value separator
if args.unsafe_fixes:
# Fix: invalid and unnecessary multi-value separators. Skip the title
# and abstract fields because "|" is used to indicate something like
# a subtitle.
match = re.match(r"^.*?(abstract|[Cc]itation|title).*$", column)
if match is None:
df[column] = df[column].apply(fix.separators, field_name=column)
# Run whitespace fix again after fixing invalid separators
df[column] = df[column].apply(fix.whitespace, field_name=column)
@ -118,36 +158,58 @@ def run(argv):
# Fix: duplicate metadata values
df[column] = df[column].apply(fix.duplicates, field_name=column)
# Check: invalid AGROVOC subject
# Check: invalid AGROVOC subject and optionally drop them
if args.agrovoc_fields:
# Identify fields the user wants to validate against AGROVOC
for field in args.agrovoc_fields.split(","):
if column == field:
df[column] = df[column].apply(check.agrovoc, field_name=column)
df[column] = df[column].apply(
check.agrovoc, field_name=column, drop=args.drop_invalid_agrovoc
)
# Check: invalid language
match = re.match(r"^.*?language.*$", column)
if match is not None:
df[column] = df[column].apply(check.language)
df[column].apply(check.language)
# Check: invalid ISSN
match = re.match(r"^.*?issn.*$", column)
if match is not None:
df[column] = df[column].apply(check.issn)
df[column].apply(check.issn)
# Check: invalid ISBN
match = re.match(r"^.*?isbn.*$", column)
if match is not None:
df[column] = df[column].apply(check.isbn)
df[column].apply(check.isbn)
# Check: invalid date
match = re.match(r"^.*?date.*$", column)
match = re.match(r"^.*?(date|dcterms\.issued).*$", column)
if match is not None:
df[column] = df[column].apply(check.date, field_name=column)
df[column].apply(check.date, field_name=column)
# Check: filename extension
if column == "filename":
df[column] = df[column].apply(check.filename_extension)
df[column].apply(check.filename_extension)
# Check: SPDX license identifier
match = re.match(r"dcterms\.license.*$", column)
if match is not None:
df[column].apply(check.spdx_license_identifier)
### End individual column checks ###
# Check: duplicate items
# We extract just the title, type, and date issued columns to analyze
try:
duplicates_df = df.filter(
regex=r"dcterms\.title|dc\.title|dcterms\.type|dc\.type|dcterms\.issued|dc\.date\.issued"
)
check.duplicate_items(duplicates_df)
# Delete the temporary duplicates DataFrame
del duplicates_df
except IndexError:
pass
##
# Perform some checks on rows so we can consider items as a whole rather
@ -160,15 +222,37 @@ def run(argv):
# column. For now it will have to do.
##
if args.experimental_checks:
# Transpose the DataFrame so we can consider each row as a column
df_transposed = df.T
# Transpose the DataFrame so we can consider each row as a column
df_transposed = df.T
for column in df_transposed.columns:
experimental.correct_language(df_transposed[column])
# Remember, here a "column" is an item (previously row). Perhaps I
# should rename column in this for loop...
for column in df_transposed.columns:
# Check: citation DOI
check.citation_doi(df_transposed[column], exclude)
# Check: title in citation
check.title_in_citation(df_transposed[column], exclude)
if args.unsafe_fixes:
# Fix: countries match regions
df_transposed[column] = fix.countries_match_regions(
df_transposed[column], exclude
)
else:
# Check: countries match regions
check.countries_match_regions(df_transposed[column], exclude)
if args.experimental_checks:
experimental.correct_language(df_transposed[column], exclude)
# Transpose the DataFrame back before writing. This is probably wasteful to
# do every time since we technically only need to do it if we've done the
# countries/regions fix above, but I can't think of another way for now.
df_transposed_back = df_transposed.T
# Write
df.to_csv(args.output_file, index=False)
df_transposed_back.to_csv(args.output_file, index=False)
# Close the input and output files before exiting
args.input_file.close()

csv_metadata_quality/check.py

@ -1,4 +1,18 @@
# SPDX-License-Identifier: GPL-3.0-only
import logging
import re
from datetime import datetime
import country_converter as coco
import pandas as pd
import requests
from colorama import Fore
from pycountry import languages
from stdnum import isbn as stdnum_isbn
from stdnum import issn as stdnum_issn
from csv_metadata_quality.util import is_mojibake, load_spdx_licenses
def issn(field):
@ -11,19 +25,16 @@ def issn(field):
See: https://arthurdejong.org/python-stdnum/doc/1.11/index.html#stdnum.module.is_valid
"""
from stdnum import issn
# Skip fields with missing values
if pd.isna(field):
return
# Try to split multi-value field on "||" separator
for value in field.split("||"):
if not stdnum_issn.is_valid(value):
print(f"{Fore.RED}Invalid ISSN: {Fore.RESET}{value}")
if not issn.is_valid(value):
print(f"Invalid ISSN: {value}")
return field
return
def isbn(field):
@ -36,43 +47,16 @@ def isbn(field):
See: https://arthurdejong.org/python-stdnum/doc/1.11/index.html#stdnum.module.is_valid
"""
from stdnum import isbn
# Skip fields with missing values
if pd.isna(field):
return
# Try to split multi-value field on "||" separator
for value in field.split("||"):
if not stdnum_isbn.is_valid(value):
print(f"{Fore.RED}Invalid ISBN: {Fore.RESET}{value}")
if not isbn.is_valid(value):
print(f"Invalid ISBN: {value}")
return field
def separators(field, field_name):
"""Check for invalid multi-value separators (ie "|" or "|||").
Prints the field with the invalid multi-value separator.
"""
import re
# Skip fields with missing values
if pd.isna(field):
return
# Try to split multi-value field on "||" separator
for value in field.split("||"):
# After splitting, see if there are any remaining "|" characters
match = re.findall(r"^.*?\|.*$", value)
if match:
print(f"Invalid multi-value separator ({field_name}): {field}")
return field
return
def date(field, field_name):
@ -85,10 +69,9 @@ def date(field, field_name):
Prints the date if invalid.
"""
from datetime import datetime
if pd.isna(field):
print(f"Missing date ({field_name}).")
print(f"{Fore.RED}Missing date ({field_name}).{Fore.RESET}")
return
@ -97,15 +80,17 @@ def date(field, field_name):
# We don't allow multi-value date fields
if len(multiple_dates) > 1:
print(f"Multiple dates not allowed ({field_name}): {field}")
print(
f"{Fore.RED}Multiple dates not allowed ({field_name}): {Fore.RESET}{field}"
)
return field
return
try:
# Check if date is valid YYYY format
datetime.strptime(field, "%Y")
return field
return
except ValueError:
pass
@ -113,7 +98,7 @@ def date(field, field_name):
# Check if date is valid YYYY-MM format
datetime.strptime(field, "%Y-%m")
return field
return
except ValueError:
pass
@ -121,11 +106,19 @@ def date(field, field_name):
# Check if date is valid YYYY-MM-DD format
datetime.strptime(field, "%Y-%m-%d")
return field
return
except ValueError:
print(f"Invalid date ({field_name}): {field}")
pass
return field
try:
# Check if date is valid YYYY-MM-DDTHH:MM:SSZ format
datetime.strptime(field, "%Y-%m-%dT%H:%M:%SZ")
return
except ValueError:
print(f"{Fore.RED}Invalid date ({field_name}): {Fore.RESET}{field}")
return
def suspicious_characters(field, field_name):
@ -140,7 +133,7 @@ def suspicious_characters(field, field_name):
return
# List of suspicious characters, for example: ́ˆ~`
suspicious_characters = ["\u00B4", "\u02C6", "\u007E", "\u0060"]
suspicious_characters = ["\u00b4", "\u02c6", "\u007e", "\u0060"]
for character in suspicious_characters:
# Find the position of the suspicious character in the string
@ -156,12 +149,10 @@ def suspicious_characters(field, field_name):
# character and spanning enough of the rest to give a preview,
# but not too much to cause the line to break in terminals with
# a default of 80 characters width.
suspicious_character_msg = (
f"Suspicious character ({field_name}): {field_subset}"
)
suspicious_character_msg = f"{Fore.YELLOW}Suspicious character ({field_name}): {Fore.RESET}{field_subset}"
print(f"{suspicious_character_msg:1.80}")
return field
return
def language(field):
@ -170,8 +161,6 @@ def language(field):
Prints the value if it is invalid.
"""
from pycountry import languages
# Skip fields with missing values
if pd.isna(field):
return
@ -180,26 +169,21 @@ def language(field):
# Try to split multi-value field on "||" separator
for value in field.split("||"):
# After splitting, check if language value is 2 or 3 characters so we
# can check it against ISO 639-1 or ISO 639-3 accordingly.
if len(value) == 2:
if not languages.get(alpha_2=value):
print(f"Invalid ISO 639-1 language: {value}")
pass
print(f"{Fore.RED}Invalid ISO 639-1 language: {Fore.RESET}{value}")
elif len(value) == 3:
if not languages.get(alpha_3=value):
print(f"Invalid ISO 639-3 language: {value}")
pass
print(f"{Fore.RED}Invalid ISO 639-3 language: {Fore.RESET}{value}")
else:
print(f"Invalid language: {value}")
print(f"{Fore.RED}Invalid language: {Fore.RESET}{value}")
return field
return
def agrovoc(field, field_name):
def agrovoc(field, field_name, drop):
"""Check subject terms against AGROVOC REST API.
Function constructor expects the field as well as the field name because
@ -213,26 +197,16 @@ def agrovoc(field, field_name):
Prints a warning if the value is invalid.
"""
from datetime import timedelta
import requests
import requests_cache
# Skip fields with missing values
if pd.isna(field):
return
# enable transparent request cache with thirty days expiry
expire_after = timedelta(days=30)
requests_cache.install_cache(
"agrovoc-response-cache", expire_after=expire_after
)
# prune old cache entries
requests_cache.core.remove_expired_responses()
# Initialize an empty list to hold the validated AGROVOC values
values = []
# Try to split multi-value field on "||" separator
for value in field.split("||"):
request_url = "http://agrovoc.uniroma2.it/agrovoc/rest/v1/agrovoc/search"
request_url = "https://agrovoc.uniroma2.it/agrovoc/rest/v1/agrovoc/search"
request_params = {"query": value}
request = requests.get(request_url, params=request_params)
@ -242,9 +216,25 @@ def agrovoc(field, field_name):
# check if there are any results
if len(data["results"]) == 0:
print(f"Invalid AGROVOC ({field_name}): {value}")
if drop:
print(
f"{Fore.GREEN}Dropping invalid AGROVOC ({field_name}): {Fore.RESET}{value}"
)
else:
print(
f"{Fore.RED}Invalid AGROVOC ({field_name}): {Fore.RESET}{value}"
)
return field
# value is invalid AGROVOC, but we are not dropping
values.append(value)
else:
# value is valid AGROVOC so save it
values.append(value)
# Create a new field consisting of all values joined with "||"
new_field = "||".join(values)
return new_field
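A rough usage sketch of the lookup above, with the request cache omitted and a plain status check assumed before parsing ("FOREST" is just an illustrative term from the test data; this performs a live network call). The API responds with a JSON document whose "results" list is empty for unknown terms.

import requests

request_url = "https://agrovoc.uniroma2.it/agrovoc/rest/v1/agrovoc/search"
response = requests.get(request_url, params={"query": "FOREST"})

if response.status_code == requests.codes.ok:
    data = response.json()
    # An empty "results" list means the term is not a valid AGROVOC subject
    print("valid" if len(data["results"]) > 0 else "invalid")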
def filename_extension(field):
@@ -258,8 +248,6 @@ def filename_extension(field):
than .pdf, .xls(x), .doc(x), ppt(x), case insensitive).
"""
import re
# Skip fields with missing values
if pd.isna(field):
return
@@ -280,6 +268,11 @@ def filename_extension(field):
# Iterate over all values
for value in values:
# Strip filename descriptions that are meant for SAF Bundler, for
# example: Annual_Report_2020.pdf__description:Report
if "__description" in value:
value = value.split("__")[0]
# Assume filename extension does not match
filename_extension_match = False
@@ -295,6 +288,273 @@ def filename_extension(field):
break
if filename_extension_match is False:
print(f"Filename with uncommon extension: {value}")
print(f"{Fore.YELLOW}Filename with uncommon extension: {Fore.RESET}{value}")
return field
return
def spdx_license_identifier(field):
"""Check if a license is a valid SPDX identifier.
Prints the value if it is invalid.
"""
# List of common non-SPDX licenses to ignore
# See: https://ilri.github.io/cgspace-submission-guidelines/dcterms-license/dcterms-license.txt
ignore_licenses = {
"All rights reserved; no re-use allowed",
"All rights reserved; self-archive copy only",
"Copyrighted; Non-commercial educational use only",
"Copyrighted; Non-commercial use only",
"Copyrighted; all rights reserved",
"Other",
}
# Skip fields with missing values
if pd.isna(field) or field in ignore_licenses:
return
spdx_licenses = load_spdx_licenses()
# Try to split multi-value field on "||" separator
for value in field.split("||"):
if value not in spdx_licenses:
print(f"{Fore.YELLOW}Non-SPDX license identifier: {Fore.RESET}{value}")
return
def duplicate_items(df):
"""Attempt to identify duplicate items.
First we check the total number of titles and compare it with the number of
unique titles. If there are fewer unique titles than total titles we expand
the search by creating a key (of sorts) for each item that includes its
title, type, and date issued, and compare it with all the others. If there
are multiple occurrences of the same title, type, and date string then it's a
very good indicator that the items are duplicates.
"""
# Extract the names of the title, type, and date issued columns so we can
# reference them later. First we filter columns by likely patterns, then
# we extract the name from the first item of the resulting object, ie:
#
# Index(['dcterms.title[en_US]'], dtype='object')
#
# But, we need to consider that dc.title.alternative might come before the
# main title in the CSV, so use a negative lookahead to eliminate that.
#
# See: https://regex101.com/r/elyXkW/1
title_column_name = df.filter(
regex=r"^(dc|dcterms)\.title(?!\.alternative).*$"
).columns[0]
type_column_name = df.filter(regex=r"^(dcterms\.type|dc\.type).*$").columns[0]
date_column_name = df.filter(
regex=r"^(dcterms\.issued|dc\.date\.accessioned).*$"
).columns[0]
items_count_total = df[title_column_name].count()
items_count_unique = df[title_column_name].nunique()
if items_count_unique < items_count_total:
# Create a list to hold our items while we check for duplicates
items = []
for index, row in df.iterrows():
item_title_type_date = f"{row[title_column_name]}{row[type_column_name]}{row[date_column_name]}"
if item_title_type_date in items:
print(
f"{Fore.YELLOW}Possible duplicate ({title_column_name}): {Fore.RESET}{row[title_column_name]}"
)
else:
items.append(item_title_type_date)
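A hypothetical usage sketch of the check above: the first two rows share the same title, type, and issue date, so the second occurrence is reported.

import pandas as pd

df = pd.DataFrame(
    {
        "dc.title[en_US]": ["Report A", "Report A", "Report B"],
        "dcterms.type": ["Report", "Report", "Brief"],
        "dcterms.issued": ["2021-03-17", "2021-03-17", "2021-03-18"],
    }
)

duplicate_items(df)
# Possible duplicate (dc.title[en_US]): Report A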
def mojibake(field, field_name):
"""Check for mojibake (text that was encoded in one encoding and decoded in
in another, perhaps multiple times). See util.py.
Prints the string if it contains suspected mojibake.
"""
# Skip fields with missing values
if pd.isna(field):
return
if is_mojibake(field):
print(
f"{Fore.YELLOW}Possible encoding issue ({field_name}): {Fore.RESET}{field}"
)
return
def citation_doi(row, exclude):
"""Check for the scenario where an item has a DOI listed in its citation,
but does not have a cg.identifier.doi field.
Function prints a warning if the DOI field is missing, but there is a DOI
in the citation.
"""
# Check if the user requested us to skip any DOI fields so we can
# just return before going any further.
for field in exclude:
match = re.match(r"^.*?doi.*$", field)
if match is not None:
return
# Initialize some variables at global scope so that we can set them in the
# loop scope below and still be able to access them afterwards.
citation = ""
# Iterate over the labels of the current row's values to check if a DOI
# exists. If not, then we extract the citation to see if there is a DOI
# listed there.
for label in row.axes[0]:
# Skip fields with missing values
if pd.isna(row[label]):
continue
# If a DOI field exists we don't need to check the citation
match = re.match(r"^.*?doi.*$", label)
if match is not None:
return
# Check if the current label is a citation field and make sure the user
# hasn't asked to skip it. If not, then set the citation.
match = re.match(r"^.*?[cC]itation.*$", label)
if match is not None and label not in exclude:
citation = row[label]
if citation != "":
# Check the citation for "doi: 10.1186/1743-422X-9-218"
doi_match1 = re.match(r"^.*?doi:\s.*$", citation)
# Check the citation for a DOI URL (doi.org, dx.doi.org, etc)
doi_match2 = re.match(r"^.*?doi\.org.*$", citation)
if doi_match1 is not None or doi_match2 is not None:
print(
f"{Fore.YELLOW}DOI in citation, but missing a DOI field: {Fore.RESET}{citation}"
)
return
def title_in_citation(row, exclude):
"""Check for the scenario where an item's title is missing from its cita-
tion. This could mean that it is missing entirely, or perhaps just exists
in a different format (whitespace, accents, etc).
Function prints a warning if the title does not appear in the citation.
"""
# Initialize some variables at global scope so that we can set them in the
# loop scope below and still be able to access them afterwards.
title = ""
citation = ""
# Iterate over the labels of the current row's values to get the names of
# the title and citation columns. Then we check if the title is present in
# the citation.
for label in row.axes[0]:
# Skip fields with missing values
if pd.isna(row[label]):
continue
# Find the name of the title column
match = re.match(r"^(dc|dcterms)\.title.*$", label)
if match is not None and label not in exclude:
title = row[label]
# Find the name of the citation column
match = re.match(r"^.*?[cC]itation.*$", label)
if match is not None and label not in exclude:
citation = row[label]
if citation != "":
if title not in citation:
print(f"{Fore.YELLOW}Title is not present in citation: {Fore.RESET}{title}")
return
def countries_match_regions(row, exclude):
"""Check for the scenario where an item has country coverage metadata, but
does not have the corresponding region metadata. For example, an item that
has country coverage "Kenya" should also have region "Eastern Africa" acc-
ording to the UN M.49 classification scheme.
See: https://unstats.un.org/unsd/methodology/m49/
Function prints a warning if the appropriate region is not present.
"""
# Initialize some variables at global scope so that we can set them in the
# loop scope below and still be able to access them afterwards.
country_column_name = ""
region_column_name = ""
title_column_name = ""
# Instantiate a CountryConverter() object here. According to the docs it is
# more performant to do that as opposed to calling coco.convert() directly
# because we don't need to re-load the country data with each iteration.
cc = coco.CountryConverter()
# Set logging to ERROR so country_converter's convert() doesn't print the
# "not found in regex" warning message to the screen.
logging.basicConfig(level=logging.ERROR)
# Iterate over the labels of the current row's values to get the names of
# the country, region, and title columns.
for label in row.axes[0]:
# Find the name of the country column
match = re.match(r"^.*?country.*$", label)
if match is not None:
country_column_name = label
# Find the name of the region column, but make sure it's not subregion!
match = re.match(r"^.*?region.*$", label)
if match is not None and "sub" not in label:
region_column_name = label
# Find the name of the title column
match = re.match(r"^(dc|dcterms)\.title.*$", label)
if match is not None:
title_column_name = label
# Make sure the user has not asked to exclude any metadata fields. If so, we
# should return immediately.
column_names = [country_column_name, region_column_name, title_column_name]
if any(field in column_names for field in exclude):
return
# Make sure we found the country and region columns
if country_column_name != "" and region_column_name != "":
# If we don't have any countries then we should return early before
# suggesting regions.
if row[country_column_name] is not None:
countries = row[country_column_name].split("||")
else:
return
if row[region_column_name] is not None:
regions = row[region_column_name].split("||")
else:
regions = []
for country in countries:
# Look up the UN M.49 regions for this country code. CoCo seems to
# only list the direct region, ie Western Africa, rather than all
# the parent regions ("Sub-Saharan Africa", "Africa", "World")
un_region = cc.convert(names=country, to="UNRegion")
if un_region != "not found" and un_region not in regions:
try:
print(
f"{Fore.YELLOW}Missing region ({country} → {un_region}): {Fore.RESET}{row[title_column_name]}"
)
except KeyError:
print(
f"{Fore.YELLOW}Missing region ({country} → {un_region}): {Fore.RESET}<title field not present>"
)
return
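For reference, a small sketch of the country_converter lookup this check relies on; the expected values match the rows in data/test-geography.csv.

import country_converter as coco

cc = coco.CountryConverter()

cc.convert(names="Kenya", to="UNRegion")      # "Eastern Africa"
cc.convert(names="Yeah Baby", to="UNRegion")  # "not found"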

File diff suppressed because it is too large

@@ -1,7 +1,14 @@
# SPDX-License-Identifier: GPL-3.0-only
import re
import pandas as pd
import py3langid as langid
from colorama import Fore
from pycountry import languages
def correct_language(row):
def correct_language(row, exclude):
"""Analyze the text used in the title, abstract, and citation fields to pre-
dict the language being used and compare it with the item's dc.language.iso
field.
@@ -10,14 +17,10 @@ def correct_language(row):
language and returns the value in the language field if it does match.
"""
from pycountry import languages
import langid
import re
# Initialize some variables at global scope so that we can set them in the
# loop scope below and still be able to access them afterwards.
language = ""
sample_strings = list()
sample_strings = []
title = None
# Iterate over the labels of the current row's values. Before we transposed
@@ -36,7 +39,8 @@ def correct_language(row):
language = row[label]
# Extract title if it is present
# Extract title if it is present (note that we don't allow excluding
# the title here because it complicates things).
match = re.match(r"^.*?title.*$", label)
if match is not None:
title = row[label]
@@ -45,12 +49,12 @@ def correct_language(row):
# Extract abstract if it is present
match = re.match(r"^.*?abstract.*$", label)
if match is not None:
if match is not None and label not in exclude:
sample_strings.append(row[label])
# Extract citation if it is present
match = re.match(r"^.*?citation.*$", label)
if match is not None:
match = re.match(r"^.*?[cC]itation.*$", label)
if match is not None and label not in exclude:
sample_strings.append(row[label])
# Make sure language is not blank and is valid ISO 639-1/639-3 before proceeding with language prediction
@@ -83,13 +87,13 @@ def correct_language(row):
detected_language = languages.get(alpha_2=langid_classification[0])
if len(language) == 2 and language != detected_language.alpha_2:
print(
f"Possibly incorrect language {language} (detected {detected_language.alpha_2}): {title}"
f"{Fore.YELLOW}Possibly incorrect language {language} (detected {detected_language.alpha_2}): {Fore.RESET}{title}"
)
elif len(language) == 3 and language != detected_language.alpha_3:
print(
f"Possibly incorrect language {language} (detected {detected_language.alpha_3}): {title}"
f"{Fore.YELLOW}Possibly incorrect language {language} (detected {detected_language.alpha_3}): {Fore.RESET}{title}"
)
else:
return language
return
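A minimal sketch of the py3langid call behind the prediction above (the sample text is hypothetical): classify() returns a (language, score) tuple, and only the ISO 639-1 code in its first element is compared against the declared language.

import py3langid as langid

lang, score = langid.classify("This is an abstract written in English.")
print(lang)  # expected: "en"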


@@ -1,6 +1,15 @@
import re
# SPDX-License-Identifier: GPL-3.0-only
import logging
import re
from unicodedata import normalize
import country_converter as coco
import pandas as pd
from colorama import Fore
from ftfy import TextFixerConfig, fix_text
from csv_metadata_quality.util import is_mojibake, is_nfc
def whitespace(field, field_name):
@@ -14,7 +23,7 @@ def whitespace(field, field_name):
return
# Initialize an empty list to hold the cleaned values
values = list()
values = []
# Try to split multi-value field on "||" separator
for value in field.split("||"):
@@ -26,7 +35,9 @@ def whitespace(field, field_name):
match = re.findall(pattern, value)
if match:
print(f"Removing excessive whitespace ({field_name}): {value}")
print(
f"{Fore.GREEN}Removing excessive whitespace ({field_name}): {Fore.RESET}{value}"
)
value = re.sub(pattern, " ", value)
# Save cleaned value
@@ -39,23 +50,40 @@ def whitespace(field, field_name):
def separators(field, field_name):
"""Fix for invalid multi-value separators (ie "|")."""
"""Fix for invalid and unnecessary multi-value separators, for example:
value|value
value|||value
value||value||
Prints the field with the invalid multi-value separator.
"""
# Skip fields with missing values
if pd.isna(field):
return
# Initialize an empty list to hold the cleaned values
values = list()
values = []
# Try to split multi-value field on "||" separator
for value in field.split("||"):
# Check if the value is blank and skip it
if value == "":
print(
f"{Fore.GREEN}Fixing unnecessary multi-value separator ({field_name}): {Fore.RESET}{field}"
)
continue
# After splitting, see if there are any remaining "|" characters
pattern = re.compile(r"\|")
match = re.findall(pattern, value)
if match:
print(f"Fixing invalid multi-value separator ({field_name}): {value}")
print(
f"{Fore.GREEN}Fixing invalid multi-value separator ({field_name}): {Fore.RESET}{value}"
)
value = re.sub(pattern, "||", value)
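A rough usage sketch, assuming the function ends by joining the cleaned values with "||" as the other fixers in fix.py do (that part falls outside this hunk).

separators("value|value", "dc.subject")
# Fixing invalid multi-value separator (dc.subject): value|value
# -> "value||value"

separators("value||value||", "dc.subject")
# Fixing unnecessary multi-value separator (dc.subject): value||value||
# -> "value||value"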
@@ -78,6 +106,7 @@ def unnecessary_unicode(field):
Replaces unnecessary Unicode characters like:
- Soft hyphen (U+00AD) hyphen
- No-break space (U+00A0) space
- Thin space (U+2009) space
Return string with characters removed or replaced.
"""
@@ -91,7 +120,7 @@ def unnecessary_unicode(field):
match = re.findall(pattern, field)
if match:
print(f"Removing unnecessary Unicode (U+200B): {field}")
print(f"{Fore.GREEN}Removing unnecessary Unicode (U+200B): {Fore.RESET}{field}")
field = re.sub(pattern, "", field)
# Check for replacement characters (U+FFFD)
@@ -99,7 +128,7 @@ def unnecessary_unicode(field):
match = re.findall(pattern, field)
if match:
print(f"Removing unnecessary Unicode (U+FFFD): {field}")
print(f"{Fore.GREEN}Removing unnecessary Unicode (U+FFFD): {Fore.RESET}{field}")
field = re.sub(pattern, "", field)
# Check for no-break spaces (U+00A0)
@@ -107,7 +136,9 @@ def unnecessary_unicode(field):
match = re.findall(pattern, field)
if match:
print(f"Replacing unnecessary Unicode (U+00A0): {field}")
print(
f"{Fore.GREEN}Replacing unnecessary Unicode (U+00A0): {Fore.RESET}{field}"
)
field = re.sub(pattern, " ", field)
# Check for soft hyphens (U+00AD), sometimes preceded by a normal hyphen
@@ -115,9 +146,21 @@ def unnecessary_unicode(field):
match = re.findall(pattern, field)
if match:
print(f"Replacing unnecessary Unicode (U+00AD): {field}")
print(
f"{Fore.GREEN}Replacing unnecessary Unicode (U+00AD): {Fore.RESET}{field}"
)
field = re.sub(pattern, "-", field)
# Check for thin spaces (U+2009)
pattern = re.compile(r"\u2009")
match = re.findall(pattern, field)
if match:
print(
f"{Fore.GREEN}Replacing unnecessary Unicode (U+2009): {Fore.RESET}{field}"
)
field = re.sub(pattern, " ", field)
return field
@@ -132,7 +175,7 @@ def duplicates(field, field_name):
values = field.split("||")
# Initialize an empty list to hold the de-duplicated values
new_values = list()
new_values = []
# Iterate over all values
for value in values:
@@ -140,7 +183,9 @@ def duplicates(field, field_name):
if value not in new_values:
new_values.append(value)
else:
print(f"Removing duplicate value ({field_name}): {value}")
print(
f"{Fore.GREEN}Removing duplicate value ({field_name}): {Fore.RESET}{value}"
)
# Create a new field consisting of all values joined with "||"
new_field = "||".join(new_values)
@@ -148,7 +193,7 @@ def duplicates(field, field_name):
return new_field
def newlines(field):
def newlines(field, field_name):
"""Fix newlines.
Single metadata values should not span multiple lines because this is not
@@ -173,7 +218,7 @@ def newlines(field):
match = re.findall(r"\n", field)
if match:
print(f"Removing newline: {field}")
print(f"{Fore.GREEN}Removing newline ({field_name}): {Fore.RESET}{field}")
field = field.replace("\n", "")
return field
@@ -197,7 +242,9 @@ def comma_space(field, field_name):
match = re.findall(r",\w", field)
if match:
print(f"Adding space after comma ({field_name}): {field}")
print(
f"{Fore.GREEN}Adding space after comma ({field_name}): {Fore.RESET}{field}"
)
field = re.sub(r",(\w)", r", \1", field)
return field
@@ -212,16 +259,210 @@ def normalize_unicode(field, field_name):
Return normalized string.
"""
from csv_metadata_quality.util import is_nfc
from unicodedata import normalize
# Skip fields with missing values
if pd.isna(field):
return
# Check if the current string is using normalized Unicode (NFC)
if not is_nfc(field):
print(f"Normalizing Unicode ({field_name}): {field}")
print(f"{Fore.GREEN}Normalizing Unicode ({field_name}): {Fore.RESET}{field}")
field = normalize("NFC", field)
return field
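The NFC idea in isolation: a decomposed "é" (a plain "e" followed by the combining acute U+0301) compares unequal to the composed character until it is normalized, which is exactly what trips this fixer on the "Decomposéd Unicode" row in data/test.csv.

from unicodedata import normalize

decomposed = "Decompose\u0301d Unicode"  # "e" + U+0301, two code points
composed = normalize("NFC", decomposed)  # "é" as a single U+00E9 code point

decomposed == composed            # False
composed == "Decomposéd Unicode"  # True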
def mojibake(field, field_name):
"""Attempts to fix mojibake (text that was encoded in one encoding and deco-
ded in another, perhaps multiple times). See util.py.
Return fixed string.
"""
# Skip fields with missing values
if pd.isna(field):
return field
# We don't want ftfy to change “smart quotes” to "ASCII quotes"
config = TextFixerConfig(uncurl_quotes=False)
if is_mojibake(field):
print(f"{Fore.GREEN}Fixing encoding issue ({field_name}): {Fore.RESET}{field}")
return fix_text(field, config)
else:
return field
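A short ftfy sketch matching the configuration above; the sample string is the mojibake row from data/test.csv, and the exact output depends on ftfy's decoding heuristics.

from ftfy import TextFixerConfig, fix_text

# Leave “smart quotes” alone, as configured above
config = TextFixerConfig(uncurl_quotes=False)
fixed = fix_text("Publicaçao CIAT", config)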
def countries_match_regions(row, exclude):
"""Check for the scenario where an item has country coverage metadata, but
does not have the corresponding region metadata. For example, an item that
has country coverage "Kenya" should also have region "Eastern Africa" acc-
ording to the UN M.49 classification scheme.
See: https://unstats.un.org/unsd/methodology/m49/
Return fixed string.
"""
# Initialize some variables at global scope so that we can set them in the
# loop scope below and still be able to access them afterwards.
country_column_name = ""
region_column_name = ""
title_column_name = ""
# Instantiate a CountryConverter() object here. According to the docs it is
# more performant to do that as opposed to calling coco.convert() directly
# because we don't need to re-load the country data with each iteration.
cc = coco.CountryConverter()
# Set logging to ERROR so country_converter's convert() doesn't print the
# "not found in regex" warning message to the screen.
logging.basicConfig(level=logging.ERROR)
# Iterate over the labels of the current row's values to get the names of
# the country, region, and title columns.
for label in row.axes[0]:
# Find the name of the country column
match = re.match(r"^.*?country.*$", label)
if match is not None:
country_column_name = label
# Find the name of the region column, but make sure it's not subregion!
match = re.match(r"^.*?region.*$", label)
if match is not None and "sub" not in label:
region_column_name = label
# Find the name of the title column
match = re.match(r"^(dc|dcterms)\.title.*$", label)
if match is not None:
title_column_name = label
# Make sure the user has not asked to exclude any metadata fields. If so, we
# should return immediately.
column_names = [country_column_name, region_column_name, title_column_name]
if any(field in column_names for field in exclude):
return row
# Make sure we found the country and region columns
if country_column_name != "" and region_column_name != "":
# If we don't have any countries then we should return early before
# suggesting regions.
if row[country_column_name] is not None:
countries = row[country_column_name].split("||")
else:
return row
if row[region_column_name] is not None:
regions = row[region_column_name].split("||")
else:
regions = []
# An empty list for our regions so we can keep track for all countries
missing_regions = []
for country in countries:
# Look up the UN M.49 regions for this country code. CoCo seems to
# only list the direct region, ie Western Africa, rather than all
# the parent regions ("Sub-Saharan Africa", "Africa", "World")
un_region = cc.convert(names=country, to="UNRegion")
# Add the new un_region to regions if it is not "not found" and if
# it doesn't already exist in regions.
if un_region != "not found" and un_region not in regions:
if un_region not in missing_regions:
try:
print(
f"{Fore.YELLOW}Adding missing region ({un_region}): {Fore.RESET}{row[title_column_name]}"
)
except KeyError:
# If there is no title column in the CSV we will print
# the fix without the title instead of crashing.
print(
f"{Fore.YELLOW}Adding missing region ({un_region}): {Fore.RESET}<title field not present>"
)
missing_regions.append(un_region)
if len(missing_regions) > 0:
# Add the missing regions back to the row, paying attention to whether
# or not the row's region column is None (aka null) or just an empty
# string (length would be 0).
if row[region_column_name] is not None and len(row[region_column_name]) > 0:
row[region_column_name] = (
row[region_column_name] + "||" + "||".join(missing_regions)
)
else:
row[region_column_name] = "||".join(missing_regions)
return row
def normalize_dois(field):
"""Normalize DOIs.
DOIs are meant to be globally unique identifiers. They are case insensitive,
but in order to compare them robustly they should be normalized to a common
format:
- strip leading and trailing whitespace
- lowercase all ASCII characters
- convert all variations to https://doi.org/10.xxxx/xxxx URI format
Return string with normalized DOI.
See: https://www.crossref.org/documentation/member-setup/constructing-your-dois/
"""
# Skip fields with missing values
if pd.isna(field):
return
# Try to split multi-value field on "||" separator
values = field.split("||")
# Initialize an empty list to hold the de-duplicated values
new_values = []
# Iterate over all values (most items will only have one DOI)
for value in values:
# Strip leading and trailing whitespace
new_value = value.strip()
new_value = new_value.lower()
# Convert to HTTPS
pattern = re.compile(r"^http://")
match = re.findall(pattern, new_value)
if match:
new_value = re.sub(pattern, "https://", new_value)
# Convert dx.doi.org to doi.org
pattern = re.compile(r"dx\.doi\.org")
match = re.findall(pattern, new_value)
if match:
new_value = re.sub(pattern, "doi.org", new_value)
# Replace values like doi: 10.11648/j.jps.20140201.14
pattern = re.compile(r"^doi: 10\.")
match = re.findall(pattern, new_value)
if match:
new_value = re.sub(pattern, "https://doi.org/10.", new_value)
# Replace values like 10.3390/foods12010115
pattern = re.compile(r"^10\.")
match = re.findall(pattern, new_value)
if match:
new_value = re.sub(pattern, "https://doi.org/10.", new_value)
if new_value != value:
print(f"{Fore.GREEN}Normalized DOI: {Fore.RESET}{value}")
new_values.append(new_value)
new_field = "||".join(new_values)
return new_field
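Tracing the fixer above against the DOI rows in data/test.csv:

normalize_dois("doi: 10.11648/j.jps.20140201.14")
# -> "https://doi.org/10.11648/j.jps.20140201.14"

normalize_dois("http://dx.doi.org/10.1016/j.envc.2023.100794")
# -> "https://doi.org/10.1016/j.envc.2023.100794"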


@@ -1,3 +1,12 @@
# SPDX-License-Identifier: GPL-3.0-only
import json
import os
from ftfy.badness import is_bad
def is_nfc(field):
"""Utility function to check whether a string is using normalized Unicode.
Python's built-in unicodedata library has the is_normalized() function, but
@@ -12,3 +21,45 @@ def is_nfc(field):
from unicodedata import normalize
return field == normalize("NFC", field)
def is_mojibake(field):
"""Determines whether a string contains mojibake.
We commonly deal with CSV files that were *encoded* in UTF-8, but decoded
as something else like CP-1252 (Windows Latin). This manifests in the form
of "mojibake", for example:
- CIAT Publicaçao
- CIAT Publicación
This uses the excellent "fixes text for you" (ftfy) library to determine
whether a string contains characters that have been encoded in one encoding
and decoded in another.
Inspired by this code snippet from Martijn Pieters on StackOverflow:
https://stackoverflow.com/questions/29071995/identify-garbage-unicode-string-using-python
Return boolean.
"""
if not is_bad(field):
# Nothing weird, should be okay
return False
try:
field.encode("sloppy-windows-1252")
except UnicodeEncodeError:
# Not CP-1252 encodable, probably fine
return False
else:
# Encodable as CP-1252, Mojibake alert level high
return True
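Usage follows directly from the docstring's examples:

is_mojibake("CIAT Publicaçao")   # True: flagged by ftfy and encodable as CP-1252
is_mojibake("CIAT Publicación")  # False: the correctly decoded form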
def load_spdx_licenses():
"""Returns a Python list of SPDX short license identifiers."""
with open(os.path.join(os.path.dirname(__file__), "data/licenses.json")) as f:
licenses = json.load(f)
# List comprehension to extract the license ID for each license
return [license["licenseId"] for license in licenses["licenses"]]
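The helper assumes the upstream spdx/license-list-data JSON layout, abridged below, and returns only the short identifiers.

# data/licenses.json (abridged):
# {
#   "licenseListVersion": "...",
#   "licenses": [
#     {"licenseId": "CC-BY-4.0", "name": "Creative Commons Attribution 4.0 International", ...},
#     ...
#   ]
# }

spdx_licenses = load_spdx_licenses()
"CC-BY-4.0" in spdx_licenses  # True
"CC-BY" in spdx_licenses      # False: the fixture's invalid SPDX identifier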


@@ -1 +1,3 @@
VERSION = "0.4.2"
# SPDX-License-Identifier: GPL-3.0-only
VERSION = "0.6.1"

data/abstract-check.csv Normal file

@@ -0,0 +1,17 @@
id,dc.title,dcterms.abstract
1,Normal item,This is an abstract
2,Leading whitespace, This is an abstract
3,Trailing whitespace,This is an abstract
4,Consecutive whitespace,This is an abstract
5,Newline,"This
is an abstract"
6,Newline with leading whitespace," This
is an abstract"
7,Newline with trailing whitespace,"This
is an abstract "
8,Newline with consecutive whitespace,"This
is an abstract"
9,Multiple newlines,"This
is
an
abstract"

data/test-geography.csv Normal file

@@ -0,0 +1,13 @@
dc.title,dcterms.issued,dcterms.type,dc.contributor.author,cg.coverage.country,cg.coverage.region
No country,2022-09-01,Report,"Orth, Alan",,
Matching country and region,2022-09-01,Report,"Orth, Alan",Kenya,Eastern Africa
Missing region,2022-09-01,Report,"Orth, Alan",Kenya,
Caribbean country with matching region,2022-09-01,Report,"Orth, Alan",Bahamas,Caribbean
Caribbean country with no region,2022-09-01,Report,"Orth, Alan",Bahamas,
Fake country with no region,2022-09-01,Report,"Orth, Alan",Yeah Baby,
SE Asian country with matching region,2022-09-01,Report,"Orth, Alan",Cambodia,South-eastern Asia
SE Asian country with no region,2022-09-01,Report,"Orth, Alan",Cambodia,
Duplicate countries with matching region,2022-09-01,Report,"Orth, Alan",Kenya||Kenya,Eastern Africa
Duplicate countries with missing regions,2022-09-01,Report,"Orth, Alan",Kenya||Kenya,
Multiple countries with no regions,2022-09-01,Report,"Orth, Alan",Kenya||Bahamas,
Multiple countries with mixed matching regions,2022-09-01,Report,"Orth, Alan",Kenya||Bahamas,Eastern Africa


@@ -1,30 +1,42 @@
dc.title,dc.date.issued,dc.identifier.issn,dc.identifier.isbn,dc.language.iso,dc.subject,cg.coverage.country,filename
Leading space,2019-07-29,,,,,,
Trailing space ,2019-07-29,,,,,,
Excessive space,2019-07-29,,,,,,
Miscellaenous ||whitespace | issues ,2019-07-29,,,,,,
Duplicate||Duplicate,2019-07-29,,,,,,
Invalid ISSN,2019-07-29,2321-2302,,,,,
Invalid ISBN,2019-07-29,,978-0-306-40615-6,,,,
Multiple valid ISSNs,2019-07-29,0378-5955||0024-9319,,,,,
Multiple valid ISBNs,2019-07-29,,99921-58-10-7||978-0-306-40615-7,,,,
Invalid date,2019-07-260,,,,,,
Multiple dates,2019-07-26||2019-01-10,,,,,,
Invalid multi-value separator,2019-07-29,0378-5955|0024-9319,,,,,
Unnecessary Unicode,2019-07-29,,,,,,
Suspicious character||foreˆt,2019-07-29,,,,,,
Invalid ISO 639-1 (alpha 2) language,2019-07-29,,,jp,,,
Invalid ISO 639-3 (alpha 3) language,2019-07-29,,,chi,,,
Invalid language,2019-07-29,,,Span,,,
Invalid AGROVOC subject,2019-07-29,,,,FOREST,,
dc.title,dcterms.issued,dc.identifier.issn,dc.identifier.isbn,dcterms.language,dcterms.subject,cg.coverage.country,filename,dcterms.license,dcterms.type,dcterms.bibliographicCitation,cg.identifier.doi,cg.coverage.region,cg.coverage.subregion
Leading space,2019-07-29,,,,,,,,,,,,
Trailing space ,2019-07-29,,,,,,,,,,,,
Excessive space,2019-07-29,,,,,,,,,,,,
Miscellaenous ||whitespace | issues ,2019-07-29,,,,,,,,,,,,
Duplicate||Duplicate,2019-07-29,,,,,,,,,,,,
Invalid ISSN,2019-07-29,2321-2302,,,,,,,,,,,
Invalid ISBN,2019-07-29,,978-0-306-40615-6,,,,,,,,,,
Multiple valid ISSNs,2019-07-29,0378-5955||0024-9319,,,,,,,,,,,
Multiple valid ISBNs,2019-07-29,,99921-58-10-7||978-0-306-40615-7,,,,,,,,,,
Invalid date,2019-07-260,,,,,,,,,,,,
Multiple dates,2019-07-26||2019-01-10,,,,,,,,,,,,
Invalid multi-value separator,2019-07-29,0378-5955|0024-9319,,,,,,,,,,,
Unnecessary Unicode,2019-07-29,,,,,,,,,,,,
Suspicious character||foreˆt,2019-07-29,,,,,,,,,,,,
Invalid ISO 639-1 (alpha 2) language,2019-07-29,,,jp,,,,,,,,,
Invalid ISO 639-3 (alpha 3) language,2019-07-29,,,chi,,,,,,,,,
Invalid language,2019-07-29,,,Span,,,,,,,,,
Invalid AGROVOC subject,2019-07-29,,,,LIVESTOCK||FOREST,,,,,,,,
Newline (LF),2019-07-30,,,,"TANZA
NIA",,
Missing date,,,,,,,
Invalid country,2019-08-01,,,,,KENYAA,
Uncommon filename extension,2019-08-10,,,,,,file.pdf.lck
Unneccesary unicode (U+002D + U+00AD),2019-08-10,,978-­92-­9043-­823-­6,,,,
"Missing space,after comma",2019-08-27,,,,,,
Incorrect ISO 639-1 language,2019-09-26,,,es,,,
Incorrect ISO 639-3 language,2019-09-26,,,spa,,,
Composéd Unicode,2020-01-14,,,,,,
Decomposéd Unicode,2020-01-14,,,,,,
NIA",,,,,,,,
Missing date,,,,,,,,,,,,,
Invalid country,2019-08-01,,,,,KENYAA,,,,,,,
Uncommon filename extension,2019-08-10,,,,,,file.pdf.lck,,,,,,
Unneccesary unicode (U+002D + U+00AD),2019-08-10,,978-­92-­9043-­823-­6,,,,,,,,,,
"Missing space,after comma",2019-08-27,,,,,,,,,,,,
Incorrect ISO 639-1 language,2019-09-26,,,es,,,,,,,,,
Incorrect ISO 639-3 language,2019-09-26,,,spa,,,,,,,,,
Composéd Unicode,2020-01-14,,,,,,,,,,,,
Decomposéd Unicode,2020-01-14,,,,,,,,,,,,
Unnecessary multi-value separator,2021-01-03,0378-5955||,,,,,,,,,,,
Invalid SPDX license identifier,2021-03-11,,,,,,,CC-BY,,,,,
Duplicate Title,2021-03-17,,,,,,,,Report,,,,
Duplicate Title,2021-03-17,,,,,,,,Report,,,,
Mojibake,2021-03-18,,,,Publicaçao CIAT,,,,Report,,,,
"DOI in citation, but missing cg.identifier.doi",2021-10-06,,,,,,,,,"Orth, A. 2021. DOI in citation, but missing cg.identifier.doi. doi: 10.1186/1743-422X-9-218",,,
Title missing from citation,2021-12-05,,,,,,,,,"Orth, A. 2021. Title missing f rom citation.",,,
Country missing region,2021-12-08,,,,,Kenya,,,,,,,
Subregion field shouldnt trigger region checks,2022-12-07,,,,,Kenya,,,,,,Eastern Africa,Baringo
DOI with HTTP and dx.doi.org,2024-04-23,,,,,,,,,,http://dx.doi.org/10.1016/j.envc.2023.100794,,
DOI with colon,2024-04-23,,,,,,,,,,doi: 10.11648/j.jps.20140201.14,,
Upper case bare DOI,2024-04-23,,,,,,,,,,10.19103/AS.2018.0043.16,,


poetry.lock generated

File diff suppressed because it is too large


@@ -1,31 +1,41 @@
[tool.poetry]
name = "csv-metadata-quality"
version = "0.4.2"
version = "0.6.1"
description="A simple, but opinionated CSV quality checking and fixing pipeline for CSVs in the DSpace ecosystem."
authors = ["Alan Orth <alan.orth@gmail.com>"]
license="GPL-3.0-only"
repository = "https://github.com/ilri/csv-metadata-quality"
homepage = "https://github.com/ilri/csv-metadata-quality"
[tool.poetry.dependencies]
python = "^3.8"
pandas = "^1.0.4"
python-stdnum = "^1.13"
xlrd = "^1.2.0"
requests = "^2.23.0"
requests-cache = "^0.5.2"
pycountry = "^19.8.18"
langid = "^1.1.6"
[tool.poetry.scripts]
csv-metadata-quality = 'csv_metadata_quality.__main__:main'
[tool.poetry.dev-dependencies]
pytest = "^5.4.2"
ipython = "^7.15.0"
flake8 = "^3.8.2"
pytest-clarity = "^0.3.0-alpha.0"
black = "^19.10b0"
isort = "^4.3.21"
csvkit = "^1.0.5"
[tool.poetry.dependencies]
python = "^3.9"
pandas = {version = "^2.0.2", extras = ["feather", "performance"]}
python-stdnum = "^1.18"
requests = "^2.28.2"
requests-cache = "^1.0.0"
colorama = "^0.4.6"
ftfy = "^6.1.1"
country-converter = "~1.1.0"
pycountry = "^23.12.7"
py3langid = "^0.2.2"
[tool.poetry.group.dev.dependencies]
pytest = "^7.2.1"
flake8 = "^7.0.0"
pytest-clarity = "^1.0.1"
black = "^23.1.0"
isort = "^5.12.0"
csvkit = "^1.1.0"
ipython = "^8.10.0"
fixit = "^2.1.0"
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
[tool.isort]
profile = "black"
line_length=88


@@ -1,5 +1,5 @@
[pytest]
addopts= -rsxX -s -v --strict --capture=sys
addopts= -rsxX -s -v --strict-markers --capture=sys
filterwarnings =
error::UserWarning
ignore:.*U.* is deprecated:DeprecationWarning

renovate.json Normal file

@@ -0,0 +1,9 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base"
],
"pip_requirements": {
"enabled": false
}
}


@@ -1,300 +1,82 @@
agate==1.6.1 \
--hash=sha256:48d6f80b35611c1ba25a642cbc5b90fcbdeeb2a54711c4a8d062ee2809334d1c \
--hash=sha256:c93aaa500b439d71e4a5cf088d0006d2ce2c76f1950960c8843114e5f361dfd3
agate-dbf==0.2.1 \
--hash=sha256:00c93c498ec9a04cc587bf63dd7340e67e2541f0df4c9a7259d7cb3dd4ce372f \
--hash=sha256:f618fadb413d41468c90d72fca945681d82d9e4d1b3d89f9bda52e607b828c0b
agate-excel==0.2.3 \
--hash=sha256:8f255ef2c87c436b7132049e1dd86c8e08bf82d8c773aea86f3069b461a17d52
agate-sql==0.5.4 \
--hash=sha256:9277490ba8b8e7c747a9ae3671f52fe486784b48d4a14e78ca197fb0e36f281b
appdirs==1.4.4 \
--hash=sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128 \
--hash=sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41
appnope==0.1.0; sys_platform == "darwin" \
--hash=sha256:5b26757dc6f79a3b7dc9fab95359328d5747fcb2409d331ea66d0272b90ab2a0 \
--hash=sha256:8b995ffe925347a2138d7ac0fe77155e4311a0ea6d6da4f5128fe4b3cbe5ed71
atomicwrites==1.4.0; sys_platform == "win32" \
--hash=sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197 \
--hash=sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a
attrs==19.3.0 \
--hash=sha256:08a96c641c3a74e44eb59afb61a24f2cb9f4d7188748e76ba4bb5edfa3cb7d1c \
--hash=sha256:f7b7ce16570fe9965acd6d30101a28f62fb4a7f9e926b3bbc9b61f8b04247e72
babel==2.8.0 \
--hash=sha256:d670ea0b10f8b723672d3a6abeb87b565b244da220d76b4dba1b66269ec152d4 \
--hash=sha256:1aac2ae2d0d8ea368fa90906567f5c08463d98ade155c0c4bfedd6a0f7160e38
backcall==0.2.0 \
--hash=sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255 \
--hash=sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e
black==19.10b0 \
--hash=sha256:1b30e59be925fafc1ee4565e5e08abef6b03fe455102883820fe5ee2e4734e0b \
--hash=sha256:c2edb73a08e9e0e6f65a0e6af18b059b8b1cdd5bef997d7a0b181df93dc81539
certifi==2020.6.20 \
--hash=sha256:8fc0819f1f30ba15bdb34cceffb9ef04d99f420f68eb75d901e9560b8749fc41 \
--hash=sha256:5930595817496dd21bb8dc35dad090f1c2cd0adfaf21204bf6732ca5d8ee34d3
chardet==3.0.4 \
--hash=sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691 \
--hash=sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae
click==7.1.2 \
--hash=sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc \
--hash=sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a
colorama==0.4.3; sys_platform == "win32" \
--hash=sha256:7d73d2a99753107a36ac6b455ee49046802e59d9d076ef8e47b61499fa29afff \
--hash=sha256:e96da0d330793e2cb9485e9ddfd918d456036c7149416295932478192f4436a1
csvkit==1.0.5 \
--hash=sha256:7bd390f4d300e45dc9ed67a32af762a916bae7d9a85087a10fd4f64ce65fd5b9
dbfread==2.0.7 \
--hash=sha256:f604def58c59694fa0160d7be5d0b8d594467278d2bb6a47d46daf7162c84cec \
--hash=sha256:07c8a9af06ffad3f6f03e8fe91ad7d2733e31a26d2b72c4dd4cfbae07ee3b73d
decorator==4.4.2 \
--hash=sha256:41fa54c2a0cc4ba648be4fd43cff00aedf5b9465c9bf18d64325bc225f08f760 \
--hash=sha256:e3a62f0520172440ca0dcc823749319382e377f37f140a0b99ef45fecb84bfe7
et-xmlfile==1.0.1 \
--hash=sha256:614d9722d572f6246302c4491846d2c393c199cfa4edc9af593437691683335b
flake8==3.8.3 \
--hash=sha256:15e351d19611c887e482fb960eae4d44845013cc142d42896e9862f775d8cf5c \
--hash=sha256:f04b9fcbac03b0a3e58c0ab3a0ecc462e023a9faf046d57794184028123aa208
idna==2.10 \
--hash=sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0 \
--hash=sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6
ipython==7.16.1 \
--hash=sha256:2dbcc8c27ca7d3cfe4fcdff7f45b27f9a8d3edfa70ff8024a71c7a8eb5f09d64 \
--hash=sha256:9f4fcb31d3b2c533333893b9172264e4821c1ac91839500f31bd43f2c59b3ccf
ipython-genutils==0.2.0 \
--hash=sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8 \
--hash=sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8
isodate==0.6.0 \
--hash=sha256:aa4d33c06640f5352aca96e4b81afd8ab3b47337cc12089822d6f322ac772c81 \
--hash=sha256:2e364a3d5759479cdb2d37cce6b9376ea504db2ff90252a2e5b7cc89cc9ff2d8
isort==4.3.21 \
--hash=sha256:6e811fcb295968434526407adb8796944f1988c5b65e8139058f2014cbe100fd \
--hash=sha256:54da7e92468955c4fceacd0c86bd0ec997b0e1ee80d97f67c35a78b719dccab1
jdcal==1.4.1 \
--hash=sha256:1abf1305fce18b4e8aa248cf8fe0c56ce2032392bc64bbd61b5dff2a19ec8bba \
--hash=sha256:472872e096eb8df219c23f2689fc336668bdb43d194094b5cc1707e1640acfc8
jedi==0.17.1 \
--hash=sha256:1ddb0ec78059e8e27ec9eb5098360b4ea0a3dd840bedf21415ea820c21b40a22 \
--hash=sha256:807d5d4f96711a2bcfdd5dfa3b1ae6d09aa53832b182090b222b5efb81f52f63
langid==1.1.6 \
--hash=sha256:044bcae1912dab85c33d8e98f2811b8f4ff1213e5e9a9e9510137b84da2cb293
leather==0.3.3 \
--hash=sha256:e0bb36a6d5f59fbf3c1a6e75e7c8bee29e67f06f5b48c0134407dde612eba5e2 \
--hash=sha256:076d1603b5281488285718ce1a5ce78cf1027fe1e76adf9c548caf83c519b988
mccabe==0.6.1 \
--hash=sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42 \
--hash=sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f
more-itertools==8.4.0 \
--hash=sha256:68c70cc7167bdf5c7c9d8f6954a7837089c6a36bf565383919bb595efb8a17e5 \
--hash=sha256:b78134b2063dd214000685165d81c154522c3ee0a1c0d4d113c80361c234c5a2
numpy==1.19.0 \
--hash=sha256:63d971bb211ad3ca37b2adecdd5365f40f3b741a455beecba70fd0dde8b2a4cb \
--hash=sha256:b6aaeadf1e4866ca0fdf7bb4eed25e521ae21a7947c59f78154b24fc7abbe1dd \
--hash=sha256:13af0184177469192d80db9bd02619f6fa8b922f9f327e077d6f2a6acb1ce1c0 \
--hash=sha256:356f96c9fbec59974a592452ab6a036cd6f180822a60b529a975c9467fcd5f23 \
--hash=sha256:fa1fe75b4a9e18b66ae7f0b122543c42debcf800aaafa0212aaff3ad273c2596 \
--hash=sha256:cbe326f6d364375a8e5a8ccb7e9cd73f4b2f6dc3b2ed205633a0db8243e2a96a \
--hash=sha256:a2e3a39f43f0ce95204beb8fe0831199542ccab1e0c6e486a0b4947256215632 \
--hash=sha256:7b852817800eb02e109ae4a9cef2beda8dd50d98b76b6cfb7b5c0099d27b52d4 \
--hash=sha256:d97a86937cf9970453c3b62abb55a6475f173347b4cde7f8dcdb48c8e1b9952d \
--hash=sha256:a86c962e211f37edd61d6e11bb4df7eddc4a519a38a856e20a6498c319efa6b0 \
--hash=sha256:d34fbb98ad0d6b563b95de852a284074514331e6b9da0a9fc894fb1cdae7a79e \
--hash=sha256:658624a11f6e1c252b2cd170d94bf28c8f9410acab9f2fd4369e11e1cd4e1aaf \
--hash=sha256:4d054f013a1983551254e2379385e359884e5af105e3efe00418977d02f634a7 \
--hash=sha256:26a45798ca2a4e168d00de75d4a524abf5907949231512f372b217ede3429e98 \
--hash=sha256:3c40c827d36c6d1c3cf413694d7dc843d50997ebffbc7c87d888a203ed6403a7 \
--hash=sha256:be62aeff8f2f054eff7725f502f6228298891fd648dc2630e03e44bf63e8cee0 \
--hash=sha256:dd53d7c4a69e766e4900f29db5872f5824a06827d594427cf1a4aa542818b796 \
--hash=sha256:30a59fb41bb6b8c465ab50d60a1b298d1cd7b85274e71f38af5a75d6c475d2d2 \
--hash=sha256:df1889701e2dfd8ba4dc9b1a010f0a60950077fb5242bb92c8b5c7f1a6f2668a \
--hash=sha256:33c623ef9ca5e19e05991f127c1be5aeb1ab5cdf30cb1c5cf3960752e58b599b \
--hash=sha256:26f509450db547e4dfa3ec739419b31edad646d21fb8d0ed0734188b35ff6b27 \
--hash=sha256:7b57f26e5e6ee2f14f960db46bd58ffdca25ca06dd997729b1b179fddd35f5a3 \
--hash=sha256:a8705c5073fe3fcc297fb8e0b31aa794e05af6a329e81b7ca4ffecab7f2b95ef \
--hash=sha256:c2edbb783c841e36ca0fa159f0ae97a88ce8137fb3a6cd82eae77349ba4b607b \
--hash=sha256:8cde829f14bd38f6da7b2954be0f2837043e8b8d7a9110ec5e318ae6bf706610 \
--hash=sha256:76766cc80d6128750075378d3bb7812cf146415bd29b588616f72c943c00d598
openpyxl==3.0.4 \
--hash=sha256:6e62f058d19b09b95d20ebfbfb04857ad08d0833190516c1660675f699c6186f \
--hash=sha256:d88dd1480668019684c66cfff3e52a5de4ed41e9df5dd52e008cbf27af0dbf87
packaging==20.4 \
--hash=sha256:998416ba6962ae7fbd6596850b80e17859a5753ba17c32284f67bfff33784181 \
--hash=sha256:4357f74f47b9c12db93624a82154e9b120fa8293699949152b22065d556079f8
pandas==1.0.5 \
--hash=sha256:faa42a78d1350b02a7d2f0dbe3c80791cf785663d6997891549d0f86dc49125e \
--hash=sha256:9c31d52f1a7dd2bb4681d9f62646c7aa554f19e8e9addc17e8b1b20011d7522d \
--hash=sha256:8778a5cc5a8437a561e3276b85367412e10ae9fff07db1eed986e427d9a674f8 \
--hash=sha256:9871ef5ee17f388f1cb35f76dc6106d40cb8165c562d573470672f4cdefa59ef \
--hash=sha256:35b670b0abcfed7cad76f2834041dcf7ae47fd9b22b63622d67cdc933d79f453 \
--hash=sha256:c9410ce8a3dee77653bc0684cfa1535a7f9c291663bd7ad79e39f5ab58f67ab3 \
--hash=sha256:02f1e8f71cd994ed7fcb9a35b6ddddeb4314822a0e09a9c5b2d278f8cb5d4096 \
--hash=sha256:b3c4f93fcb6e97d993bf87cdd917883b7dab7d20c627699f360a8fb49e9e0b91 \
--hash=sha256:5759edf0b686b6f25a5d4a447ea588983a33afc8a0081a0954184a4a87fd0dd7 \
--hash=sha256:ab8173a8efe5418bbe50e43f321994ac6673afc5c7c4839014cf6401bbdd0705 \
--hash=sha256:13f75fb18486759da3ff40f5345d9dd20e7d78f2a39c5884d013456cec9876f0 \
--hash=sha256:5a7cf6044467c1356b2b49ef69e50bf4d231e773c3ca0558807cdba56b76820b \
--hash=sha256:ae961f1f0e270f1e4e2273f6a539b2ea33248e0e3a11ffb479d757918a5e03a9 \
--hash=sha256:f69e0f7b7c09f1f612b1f8f59e2df72faa8a6b41c5a436dde5b615aaf948f107 \
--hash=sha256:4c73f373b0800eb3062ffd13d4a7a2a6d522792fa6eb204d67a4fad0a40f03dc \
--hash=sha256:69c5d920a0b2a9838e677f78f4dde506b95ea8e4d30da25859db6469ded84fa8
parsedatetime==2.6 \
--hash=sha256:cb96edd7016872f58479e35879294258c71437195760746faffedb692aef000b \
--hash=sha256:4cb368fbb18a0b7231f4d76119165451c8d2e35951455dfee97c62a87b04d455
parso==0.7.0 \
--hash=sha256:158c140fc04112dc45bca311633ae5033c2c2a7b732fa33d0955bad8152a8dd0 \
--hash=sha256:908e9fae2144a076d72ae4e25539143d40b8e3eafbaeae03c1bfe226f4cdf12c
pathspec==0.8.0 \
--hash=sha256:7d91249d21749788d07a2d0f94147accd8f845507400749ea19c1ec9054a12b0 \
--hash=sha256:da45173eb3a6f2a5a487efba21f050af2b41948be6ab52b6a1e3ff22bb8b7061
pexpect==4.8.0; sys_platform != "win32" \
--hash=sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937 \
--hash=sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c
pickleshare==0.7.5 \
--hash=sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56 \
--hash=sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca
pluggy==0.13.1 \
--hash=sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d \
--hash=sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0
prompt-toolkit==3.0.5 \
--hash=sha256:df7e9e63aea609b1da3a65641ceaf5bc7d05e0a04de5bd45d05dbeffbabf9e04 \
--hash=sha256:563d1a4140b63ff9dd587bda9557cffb2fe73650205ab6f4383092fb882e7dc8
ptyprocess==0.6.0; sys_platform != "win32" \
--hash=sha256:d7cc528d76e76342423ca640335bd3633420dc1366f258cb31d05e865ef5ca1f \
--hash=sha256:923f299cc5ad920c68f2bc0bc98b75b9f838b93b599941a6b63ddbc2476394c0
py==1.9.0 \
--hash=sha256:366389d1db726cd2fcfc79732e75410e5fe4d31db13692115529d34069a043c2 \
--hash=sha256:9ca6883ce56b4e8da7e79ac18787889fa5206c79dcc67fb065376cd2fe03f342
pycodestyle==2.6.0 \
--hash=sha256:2295e7b2f6b5bd100585ebcb1f616591b652db8a741695b3d8f5d28bdc934367 \
--hash=sha256:c58a7d2815e0e8d7972bf1803331fb0152f867bd89adf8a01dfd55085434192e
pycountry==19.8.18 \
--hash=sha256:3c57aa40adcf293d59bebaffbe60d8c39976fba78d846a018dc0c2ec9c6cb3cb
pyflakes==2.2.0 \
--hash=sha256:0d94e0e05a19e57a99444b6ddcf9a6eb2e5c68d3ca1e98e90707af8152c90a92 \
--hash=sha256:35b2d75ee967ea93b55750aa9edbbf72813e06a66ba54438df2cfac9e3c27fc8
pygments==2.6.1 \
--hash=sha256:ff7a40b4860b727ab48fad6360eb351cc1b33cbf9b15a0f689ca5353e9463324 \
--hash=sha256:647344a061c249a3b74e230c739f434d7ea4d8b1d5f3721bc0f3558049b38f44
pyparsing==2.4.7 \
--hash=sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b \
--hash=sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1
pytest==5.4.3 \
--hash=sha256:5c0db86b698e8f170ba4582a492248919255fcd4c79b1ee64ace34301fb589a1 \
--hash=sha256:7979331bfcba207414f5e1263b5a0f8f521d0f457318836a7355531ed1a4c7d8
pytest-clarity==0.3.0a0 \
--hash=sha256:5cc99e3d9b7969dfe17e5f6072d45a917c59d363b679686d3c958a1ded2e4dcf
python-dateutil==2.8.1 \
--hash=sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c \
--hash=sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a
python-slugify==4.0.1 \
--hash=sha256:69a517766e00c1268e5bbfc0d010a0a8508de0b18d30ad5a1ff357f8ae724270
python-stdnum==1.13 \
--hash=sha256:120f83d33fb8b8be1b282f20dd755a892d5facf84f54fa21f75bbd2633128160 \
--hash=sha256:3d5d4430579cba88211d3ba4855a16faff235352a25a01d6ab70024686a75823
pytimeparse==1.1.8 \
--hash=sha256:04b7be6cc8bd9f5647a6325444926c3ac34ee6bc7e69da4367ba282f076036bd \
--hash=sha256:e86136477be924d7e670646a98561957e8ca7308d44841e21f5ddea757556a0a
pytz==2020.1 \
--hash=sha256:a494d53b6d39c3c6e44c3bec237336e14305e4f29bbf800b599253057fbb79ed \
--hash=sha256:c35965d010ce31b23eeb663ed3cc8c906275d6be1a34393a1d73a41febf4a048
regex==2020.6.8 \
--hash=sha256:fbff901c54c22425a5b809b914a3bfaf4b9570eee0e5ce8186ac71eb2025191c \
--hash=sha256:112e34adf95e45158c597feea65d06a8124898bdeac975c9087fe71b572bd938 \
--hash=sha256:92d8a043a4241a710c1cf7593f5577fbb832cf6c3a00ff3fc1ff2052aff5dd89 \
--hash=sha256:bae83f2a56ab30d5353b47f9b2a33e4aac4de9401fb582b55c42b132a8ac3868 \
--hash=sha256:b2ba0f78b3ef375114856cbdaa30559914d081c416b431f2437f83ce4f8b7f2f \
--hash=sha256:95fa7726d073c87141f7bbfb04c284901f8328e2d430eeb71b8ffdd5742a5ded \
--hash=sha256:e3cdc9423808f7e1bb9c2e0bdb1c9dc37b0607b30d646ff6faf0d4e41ee8fee3 \
--hash=sha256:c78e66a922de1c95a208e4ec02e2e5cf0bb83a36ceececc10a72841e53fbf2bd \
--hash=sha256:08997a37b221a3e27d68ffb601e45abfb0093d39ee770e4257bd2f5115e8cb0a \
--hash=sha256:2f6f211633ee8d3f7706953e9d3edc7ce63a1d6aad0be5dcee1ece127eea13ae \
--hash=sha256:55b4c25cbb3b29f8d5e63aeed27b49fa0f8476b0d4e1b3171d85db891938cc3a \
--hash=sha256:89cda1a5d3e33ec9e231ece7307afc101b5217523d55ef4dc7fb2abd6de71ba3 \
--hash=sha256:690f858d9a94d903cf5cada62ce069b5d93b313d7d05456dbcd99420856562d9 \
--hash=sha256:1700419d8a18c26ff396b3b06ace315b5f2a6e780dad387e4c48717a12a22c29 \
--hash=sha256:654cb773b2792e50151f0e22be0f2b6e1c3a04c5328ff1d9d59c0398d37ef610 \
--hash=sha256:52e1b4bef02f4040b2fd547357a170fc1146e60ab310cdbdd098db86e929b387 \
--hash=sha256:cf59bbf282b627130f5ba68b7fa3abdb96372b24b66bdf72a4920e8153fc7910 \
--hash=sha256:5aaa5928b039ae440d775acea11d01e42ff26e1561c0ffcd3d805750973c6baf \
--hash=sha256:97712e0d0af05febd8ab63d2ef0ab2d0cd9deddf4476f7aa153f76feef4b2754 \
--hash=sha256:6ad8663c17db4c5ef438141f99e291c4d4edfeaacc0ce28b5bba2b0bf273d9b5 \
--hash=sha256:e9b64e609d37438f7d6e68c2546d2cb8062f3adb27e6336bc129b51be20773ac
requests==2.24.0 \
--hash=sha256:fe75cc94a9443b9246fc7049224f75604b113c36acb93f87b80ed42c44cbb898 \
--hash=sha256:b3559a131db72c33ee969480840fff4bb6dd111de7dd27c8ee1f820f4f00231b
requests-cache==0.5.2 \
--hash=sha256:813023269686045f8e01e2289cc1e7e9ae5ab22ddd1e2849a9093ab3ab7270eb \
--hash=sha256:81e13559baee64677a7d73b85498a5a8f0639e204517b5d05ff378e44a57831a
six==1.15.0 \
--hash=sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced \
--hash=sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259
sqlalchemy==1.3.18 \
--hash=sha256:f11c2437fb5f812d020932119ba02d9e2bc29a6eca01a055233a8b449e3e1e7d \
--hash=sha256:0ec575db1b54909750332c2e335c2bb11257883914a03bc5a3306a4488ecc772 \
--hash=sha256:f57be5673e12763dd400fea568608700a63ce1c6bd5bdbc3cc3a2c5fdb045274 \
--hash=sha256:8cac7bb373a5f1423e28de3fd5fc8063b9c8ffe8957dc1b1a59cb90453db6da1 \
--hash=sha256:adad60eea2c4c2a1875eb6305a0b6e61a83163f8e233586a4d6a55221ef984fe \
--hash=sha256:57aa843b783179ab72e863512e14bdcba186641daf69e4e3a5761d705dcc35b1 \
--hash=sha256:621f58cd921cd71ba6215c42954ffaa8a918eecd8c535d97befa1a8acad986dd \
--hash=sha256:fc728ece3d5c772c196fd338a99798e7efac7a04f9cb6416299a3638ee9a94cd \
--hash=sha256:736d41cfebedecc6f159fc4ac0769dc89528a989471dc1d378ba07d29a60ba1c \
--hash=sha256:427273b08efc16a85aa2b39892817e78e3ed074fcb89b2a51c4979bae7e7ba98 \
--hash=sha256:cbe1324ef52ff26ccde2cb84b8593c8bf930069dfc06c1e616f1bfd4e47f48a3 \
--hash=sha256:8fd452dc3d49b3cc54483e033de6c006c304432e6f84b74d7b2c68afa2569ae5 \
--hash=sha256:e89e0d9e106f8a9180a4ca92a6adde60c58b1b0299e1b43bd5e0312f535fbf33 \
--hash=sha256:6ac2558631a81b85e7fb7a44e5035347938b0a73f5fdc27a8566777d0792a6a4 \
--hash=sha256:87fad64529cde4f1914a5b9c383628e1a8f9e3930304c09cf22c2ae118a1280e \
--hash=sha256:e4624d7edb2576cd72bb83636cd71c8ce544d8e272f308bd80885056972ca299 \
--hash=sha256:89494df7f93b1836cae210c42864b292f9b31eeabca4810193761990dc689cce \
--hash=sha256:716754d0b5490bdcf68e1e4925edc02ac07209883314ad01a137642ddb2056f1 \
--hash=sha256:50c4ee32f0e1581828843267d8de35c3298e86ceecd5e9017dc45788be70a864 \
--hash=sha256:d98bc827a1293ae767c8f2f18be3bb5151fd37ddcd7da2a5f9581baeeb7a3fa1 \
--hash=sha256:0942a3a0df3f6131580eddd26d99071b48cfe5aaf3eab2783076fbc5a1c1882e \
--hash=sha256:16593fd748944726540cd20f7e83afec816c2ac96b082e26ae226e8f7e9688cf \
--hash=sha256:c26f95e7609b821b5f08a72dab929baa0d685406b953efd7c89423a511d5c413 \
--hash=sha256:512a85c3c8c3995cc91af3e90f38f460da5d3cade8dc3a229c8e0879037547c9 \
--hash=sha256:d05c4adae06bd0c7f696ae3ec8d993ed8ffcc4e11a76b1b35a5af8a099bd2284 \
--hash=sha256:109581ccc8915001e8037b73c29590e78ce74be49ca0a3630a23831f9e3ed6c7 \
--hash=sha256:8619b86cb68b185a778635be5b3e6018623c0761dde4df2f112896424aa27bd8 \
--hash=sha256:da2fb75f64792c1fc64c82313a00c728a7c301efe6a60b7a9fe35b16b4368ce7
termcolor==1.1.0 \
--hash=sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b
text-unidecode==1.3 \
--hash=sha256:bad6603bb14d279193107714b288be206cac565dfa49aa5b105294dd5c4aab93 \
--hash=sha256:1311f10e8b895935241623731c2ba64f4c455287888b18189350b67134a822e8
toml==0.10.1 \
--hash=sha256:bda89d5935c2eac546d648028b9901107a595863cb36bae0c73ac804a9b4ce88 \
--hash=sha256:926b612be1e5ce0634a2ca03470f95169cf16f939018233a670519cb4ac58b0f
traitlets==4.3.3 \
--hash=sha256:70b4c6a1d9019d7b4f6846832288f86998aa3b9207c6821f3578a6a6a467fe44 \
--hash=sha256:d023ee369ddd2763310e4c3eae1ff649689440d4ae59d7485eb4cfbbe3e359f7
typed-ast==1.4.1 \
--hash=sha256:73d785a950fc82dd2a25897d525d003f6378d1cb23ab305578394694202a58c3 \
--hash=sha256:aaee9905aee35ba5905cfb3c62f3e83b3bec7b39413f0a7f19be4e547ea01ebb \
--hash=sha256:0c2c07682d61a629b68433afb159376e24e5b2fd4641d35424e462169c0a7919 \
--hash=sha256:4083861b0aa07990b619bd7ddc365eb7fa4b817e99cf5f8d9cf21a42780f6e01 \
--hash=sha256:269151951236b0f9a6f04015a9004084a5ab0d5f19b57de779f908621e7d8b75 \
--hash=sha256:24995c843eb0ad11a4527b026b4dde3da70e1f2d8806c99b7b4a7cf491612652 \
--hash=sha256:fe460b922ec15dd205595c9b5b99e2f056fd98ae8f9f56b888e7a17dc2b757e7 \
--hash=sha256:4e3e5da80ccbebfff202a67bf900d081906c358ccc3d5e3c8aea42fdfdfd51c1 \
--hash=sha256:249862707802d40f7f29f6e1aad8d84b5aa9e44552d2cc17384b209f091276aa \
--hash=sha256:8ce678dbaf790dbdb3eba24056d5364fb45944f33553dd5869b7580cdbb83614 \
--hash=sha256:c9e348e02e4d2b4a8b2eedb48210430658df6951fa484e59de33ff773fbd4b41 \
--hash=sha256:bcd3b13b56ea479b3650b82cabd6b5343a625b0ced5429e4ccad28a8973f301b \
--hash=sha256:d5d33e9e7af3b34a40dc05f498939f0ebf187f07c385fd58d591c533ad8562fe \
--hash=sha256:0666aa36131496aed8f7be0410ff974562ab7eeac11ef351def9ea6fa28f6355 \
--hash=sha256:d205b1b46085271b4e15f670058ce182bd1199e56b317bf2ec004b6a44f911f6 \
--hash=sha256:6daac9731f172c2a22ade6ed0c00197ee7cc1221aa84cfdf9c31defeb059a907 \
--hash=sha256:498b0f36cc7054c1fead3d7fc59d2150f4d5c6c56ba7fb150c013fbc683a8d2d \
--hash=sha256:715ff2f2df46121071622063fc7543d9b1fd19ebfc4f5c8895af64a77a8c852c \
--hash=sha256:fc0fea399acb12edbf8a628ba8d2312f583bdbdb3335635db062fa98cf71fca4 \
--hash=sha256:d43943ef777f9a1c42bf4e552ba23ac77a6351de620aa9acf64ad54933ad4d34 \
--hash=sha256:8c8aaad94455178e3187ab22c8b01a3837f8ee50e09cf31f1ba129eb293ec30b
urllib3==1.25.9 \
--hash=sha256:88206b0eb87e6d677d424843ac5209e3fb9d0190d0ee169599165ec25e9d9115 \
--hash=sha256:3018294ebefce6572a474f0604c2021e33b3fd8006ecd11d62107a5d2a963527
wcwidth==0.2.5 \
--hash=sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784 \
--hash=sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83
xlrd==1.2.0 \
--hash=sha256:e551fb498759fa3a5384a94ccd4c3c02eb7c00ea424426e212ac0c57be9dfbde \
--hash=sha256:546eb36cee8db40c3eaa46c351e67ffee6eeb5fa2650b71bc4c758a29a1b29b2
agate-dbf==0.2.2 ; python_version >= "3.9" and python_version < "4.0"
agate-excel==0.2.5 ; python_version >= "3.9" and python_version < "4.0"
agate-sql==0.5.9 ; python_version >= "3.9" and python_version < "4.0"
agate==1.7.1 ; python_version >= "3.9" and python_version < "4.0"
appdirs==1.4.4 ; python_version >= "3.9" and python_version < "4.0"
appnope==0.1.3 ; python_version >= "3.9" and python_version < "4.0" and sys_platform == "darwin"
asttokens==2.2.1 ; python_version >= "3.9" and python_version < "4.0"
attrs==23.1.0 ; python_version >= "3.9" and python_version < "4.0"
babel==2.12.1 ; python_version >= "3.9" and python_version < "4.0"
backcall==0.2.0 ; python_version >= "3.9" and python_version < "4.0"
black==23.3.0 ; python_version >= "3.9" and python_version < "4.0"
cattrs==22.2.0 ; python_version >= "3.9" and python_version < "4.0"
certifi==2022.12.7 ; python_version >= "3.9" and python_version < "4.0"
charset-normalizer==3.1.0 ; python_version >= "3.9" and python_version < "4.0"
click==8.1.3 ; python_version >= "3.9" and python_version < "4.0"
colorama==0.4.6 ; python_version >= "3.9" and python_version < "4.0"
country-converter==1.0.0 ; python_version >= "3.9" and python_version < "4.0"
csvkit==1.1.1 ; python_version >= "3.9" and python_version < "4.0"
dbfread==2.0.7 ; python_version >= "3.9" and python_version < "4.0"
decorator==5.1.1 ; python_version >= "3.9" and python_version < "4.0"
et-xmlfile==1.1.0 ; python_version >= "3.9" and python_version < "4.0"
exceptiongroup==1.1.1 ; python_version >= "3.9" and python_version < "3.11"
executing==1.2.0 ; python_version >= "3.9" and python_version < "4.0"
flake8==6.0.0 ; python_version >= "3.9" and python_version < "4.0"
ftfy==6.1.1 ; python_version >= "3.9" and python_version < "4"
greenlet==2.0.2 ; python_version >= "3.9" and platform_machine == "aarch64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "ppc64le" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "x86_64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "amd64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "AMD64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "win32" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "WIN32" and python_version < "4.0"
idna==3.4 ; python_version >= "3.9" and python_version < "4.0"
iniconfig==2.0.0 ; python_version >= "3.9" and python_version < "4.0"
ipython==8.13.1 ; python_version >= "3.9" and python_version < "4.0"
isodate==0.6.1 ; python_version >= "3.9" and python_version < "4.0"
isort==5.12.0 ; python_version >= "3.9" and python_version < "4.0"
jedi==0.18.2 ; python_version >= "3.9" and python_version < "4.0"
langid==1.1.6 ; python_version >= "3.9" and python_version < "4.0"
leather==0.3.4 ; python_version >= "3.9" and python_version < "4.0"
markdown-it-py==2.2.0 ; python_version >= "3.9" and python_version < "4.0"
matplotlib-inline==0.1.6 ; python_version >= "3.9" and python_version < "4.0"
mccabe==0.7.0 ; python_version >= "3.9" and python_version < "4.0"
mdurl==0.1.2 ; python_version >= "3.9" and python_version < "4.0"
mypy-extensions==1.0.0 ; python_version >= "3.9" and python_version < "4.0"
numpy==1.24.3 ; python_version >= "3.9" and python_version < "4.0"
olefile==0.46 ; python_version >= "3.9" and python_version < "4.0"
openpyxl==3.1.2 ; python_version >= "3.9" and python_version < "4.0"
packaging==23.1 ; python_version >= "3.9" and python_version < "4.0"
pandas==2.0.1 ; python_version >= "3.9" and python_version < "4.0"
parsedatetime==2.6 ; python_version >= "3.9" and python_version < "4.0"
parso==0.8.3 ; python_version >= "3.9" and python_version < "4.0"
pathspec==0.11.1 ; python_version >= "3.9" and python_version < "4.0"
pexpect==4.8.0 ; python_version >= "3.9" and python_version < "4.0" and sys_platform != "win32"
pickleshare==0.7.5 ; python_version >= "3.9" and python_version < "4.0"
platformdirs==3.5.0 ; python_version >= "3.9" and python_version < "4.0"
pluggy==1.0.0 ; python_version >= "3.9" and python_version < "4.0"
pprintpp==0.4.0 ; python_version >= "3.9" and python_version < "4.0"
prompt-toolkit==3.0.38 ; python_version >= "3.9" and python_version < "4.0"
ptyprocess==0.7.0 ; python_version >= "3.9" and python_version < "4.0" and sys_platform != "win32"
pure-eval==0.2.2 ; python_version >= "3.9" and python_version < "4.0"
pyarrow==11.0.0 ; python_version >= "3.9" and python_version < "4.0"
pycodestyle==2.10.0 ; python_version >= "3.9" and python_version < "4.0"
pycountry @ git+https://github.com/alanorth/pycountry@iso-codes-4.13.0 ; python_version >= "3.9" and python_version < "4.0"
pyflakes==3.0.1 ; python_version >= "3.9" and python_version < "4.0"
pygments==2.15.1 ; python_version >= "3.9" and python_version < "4.0"
pytest-clarity==1.0.1 ; python_version >= "3.9" and python_version < "4.0"
pytest==7.3.1 ; python_version >= "3.9" and python_version < "4.0"
python-dateutil==2.8.2 ; python_version >= "3.9" and python_version < "4.0"
python-slugify==8.0.1 ; python_version >= "3.9" and python_version < "4.0"
python-stdnum==1.18 ; python_version >= "3.9" and python_version < "4.0"
pytimeparse==1.1.8 ; python_version >= "3.9" and python_version < "4.0"
pytz==2023.3 ; python_version >= "3.9" and python_version < "4.0"
requests-cache==0.9.8 ; python_version >= "3.9" and python_version < "4.0"
requests==2.29.0 ; python_version >= "3.9" and python_version < "4.0"
rich==13.3.5 ; python_version >= "3.9" and python_version < "4.0"
six==1.16.0 ; python_version >= "3.9" and python_version < "4.0"
sqlalchemy==1.4.48 ; python_version >= "3.9" and python_version < "4.0"
stack-data==0.6.2 ; python_version >= "3.9" and python_version < "4.0"
text-unidecode==1.3 ; python_version >= "3.9" and python_version < "4.0"
tomli==2.0.1 ; python_version >= "3.9" and python_version < "3.11"
traitlets==5.9.0 ; python_version >= "3.9" and python_version < "4.0"
typing-extensions==4.5.0 ; python_version >= "3.9" and python_version < "3.10"
tzdata==2023.3 ; python_version >= "3.9" and python_version < "4.0"
url-normalize==1.4.3 ; python_version >= "3.9" and python_version < "4.0"
urllib3==1.26.15 ; python_version >= "3.9" and python_version < "4.0"
wcwidth==0.2.6 ; python_version >= "3.9" and python_version < "4"
xlrd==2.0.1 ; python_version >= "3.9" and python_version < "4.0"
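
Each pin above carries a PEP 508 environment marker that pip evaluates against the running interpreter before installing (the sha256 lines in the older format are what `pip install --require-hashes -r requirements.txt` verifies instead). As a quick sketch, the markers can be checked programmatically with the packaging library, which is itself pinned above:

from packaging.markers import Marker

marker = Marker('python_version >= "3.9" and python_version < "4.0"')
print(marker.evaluate())  # evaluated against the current interpreter
print(marker.evaluate({"python_version": "3.8"}))  # False: below the 3.9 floor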


@@ -1,81 +1,25 @@
certifi==2020.6.20 \
--hash=sha256:8fc0819f1f30ba15bdb34cceffb9ef04d99f420f68eb75d901e9560b8749fc41 \
--hash=sha256:5930595817496dd21bb8dc35dad090f1c2cd0adfaf21204bf6732ca5d8ee34d3
chardet==3.0.4 \
--hash=sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691 \
--hash=sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae
idna==2.10 \
--hash=sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0 \
--hash=sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6
langid==1.1.6 \
--hash=sha256:044bcae1912dab85c33d8e98f2811b8f4ff1213e5e9a9e9510137b84da2cb293
numpy==1.19.0 \
--hash=sha256:63d971bb211ad3ca37b2adecdd5365f40f3b741a455beecba70fd0dde8b2a4cb \
--hash=sha256:b6aaeadf1e4866ca0fdf7bb4eed25e521ae21a7947c59f78154b24fc7abbe1dd \
--hash=sha256:13af0184177469192d80db9bd02619f6fa8b922f9f327e077d6f2a6acb1ce1c0 \
--hash=sha256:356f96c9fbec59974a592452ab6a036cd6f180822a60b529a975c9467fcd5f23 \
--hash=sha256:fa1fe75b4a9e18b66ae7f0b122543c42debcf800aaafa0212aaff3ad273c2596 \
--hash=sha256:cbe326f6d364375a8e5a8ccb7e9cd73f4b2f6dc3b2ed205633a0db8243e2a96a \
--hash=sha256:a2e3a39f43f0ce95204beb8fe0831199542ccab1e0c6e486a0b4947256215632 \
--hash=sha256:7b852817800eb02e109ae4a9cef2beda8dd50d98b76b6cfb7b5c0099d27b52d4 \
--hash=sha256:d97a86937cf9970453c3b62abb55a6475f173347b4cde7f8dcdb48c8e1b9952d \
--hash=sha256:a86c962e211f37edd61d6e11bb4df7eddc4a519a38a856e20a6498c319efa6b0 \
--hash=sha256:d34fbb98ad0d6b563b95de852a284074514331e6b9da0a9fc894fb1cdae7a79e \
--hash=sha256:658624a11f6e1c252b2cd170d94bf28c8f9410acab9f2fd4369e11e1cd4e1aaf \
--hash=sha256:4d054f013a1983551254e2379385e359884e5af105e3efe00418977d02f634a7 \
--hash=sha256:26a45798ca2a4e168d00de75d4a524abf5907949231512f372b217ede3429e98 \
--hash=sha256:3c40c827d36c6d1c3cf413694d7dc843d50997ebffbc7c87d888a203ed6403a7 \
--hash=sha256:be62aeff8f2f054eff7725f502f6228298891fd648dc2630e03e44bf63e8cee0 \
--hash=sha256:dd53d7c4a69e766e4900f29db5872f5824a06827d594427cf1a4aa542818b796 \
--hash=sha256:30a59fb41bb6b8c465ab50d60a1b298d1cd7b85274e71f38af5a75d6c475d2d2 \
--hash=sha256:df1889701e2dfd8ba4dc9b1a010f0a60950077fb5242bb92c8b5c7f1a6f2668a \
--hash=sha256:33c623ef9ca5e19e05991f127c1be5aeb1ab5cdf30cb1c5cf3960752e58b599b \
--hash=sha256:26f509450db547e4dfa3ec739419b31edad646d21fb8d0ed0734188b35ff6b27 \
--hash=sha256:7b57f26e5e6ee2f14f960db46bd58ffdca25ca06dd997729b1b179fddd35f5a3 \
--hash=sha256:a8705c5073fe3fcc297fb8e0b31aa794e05af6a329e81b7ca4ffecab7f2b95ef \
--hash=sha256:c2edbb783c841e36ca0fa159f0ae97a88ce8137fb3a6cd82eae77349ba4b607b \
--hash=sha256:8cde829f14bd38f6da7b2954be0f2837043e8b8d7a9110ec5e318ae6bf706610 \
--hash=sha256:76766cc80d6128750075378d3bb7812cf146415bd29b588616f72c943c00d598
pandas==1.0.5 \
--hash=sha256:faa42a78d1350b02a7d2f0dbe3c80791cf785663d6997891549d0f86dc49125e \
--hash=sha256:9c31d52f1a7dd2bb4681d9f62646c7aa554f19e8e9addc17e8b1b20011d7522d \
--hash=sha256:8778a5cc5a8437a561e3276b85367412e10ae9fff07db1eed986e427d9a674f8 \
--hash=sha256:9871ef5ee17f388f1cb35f76dc6106d40cb8165c562d573470672f4cdefa59ef \
--hash=sha256:35b670b0abcfed7cad76f2834041dcf7ae47fd9b22b63622d67cdc933d79f453 \
--hash=sha256:c9410ce8a3dee77653bc0684cfa1535a7f9c291663bd7ad79e39f5ab58f67ab3 \
--hash=sha256:02f1e8f71cd994ed7fcb9a35b6ddddeb4314822a0e09a9c5b2d278f8cb5d4096 \
--hash=sha256:b3c4f93fcb6e97d993bf87cdd917883b7dab7d20c627699f360a8fb49e9e0b91 \
--hash=sha256:5759edf0b686b6f25a5d4a447ea588983a33afc8a0081a0954184a4a87fd0dd7 \
--hash=sha256:ab8173a8efe5418bbe50e43f321994ac6673afc5c7c4839014cf6401bbdd0705 \
--hash=sha256:13f75fb18486759da3ff40f5345d9dd20e7d78f2a39c5884d013456cec9876f0 \
--hash=sha256:5a7cf6044467c1356b2b49ef69e50bf4d231e773c3ca0558807cdba56b76820b \
--hash=sha256:ae961f1f0e270f1e4e2273f6a539b2ea33248e0e3a11ffb479d757918a5e03a9 \
--hash=sha256:f69e0f7b7c09f1f612b1f8f59e2df72faa8a6b41c5a436dde5b615aaf948f107 \
--hash=sha256:4c73f373b0800eb3062ffd13d4a7a2a6d522792fa6eb204d67a4fad0a40f03dc \
--hash=sha256:69c5d920a0b2a9838e677f78f4dde506b95ea8e4d30da25859db6469ded84fa8
pycountry==19.8.18 \
--hash=sha256:3c57aa40adcf293d59bebaffbe60d8c39976fba78d846a018dc0c2ec9c6cb3cb
python-dateutil==2.8.1 \
--hash=sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c \
--hash=sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a
python-stdnum==1.13 \
--hash=sha256:120f83d33fb8b8be1b282f20dd755a892d5facf84f54fa21f75bbd2633128160 \
--hash=sha256:3d5d4430579cba88211d3ba4855a16faff235352a25a01d6ab70024686a75823
pytz==2020.1 \
--hash=sha256:a494d53b6d39c3c6e44c3bec237336e14305e4f29bbf800b599253057fbb79ed \
--hash=sha256:c35965d010ce31b23eeb663ed3cc8c906275d6be1a34393a1d73a41febf4a048
requests==2.24.0 \
--hash=sha256:fe75cc94a9443b9246fc7049224f75604b113c36acb93f87b80ed42c44cbb898 \
--hash=sha256:b3559a131db72c33ee969480840fff4bb6dd111de7dd27c8ee1f820f4f00231b
requests-cache==0.5.2 \
--hash=sha256:813023269686045f8e01e2289cc1e7e9ae5ab22ddd1e2849a9093ab3ab7270eb \
--hash=sha256:81e13559baee64677a7d73b85498a5a8f0639e204517b5d05ff378e44a57831a
six==1.15.0 \
--hash=sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced \
--hash=sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259
urllib3==1.25.9 \
--hash=sha256:88206b0eb87e6d677d424843ac5209e3fb9d0190d0ee169599165ec25e9d9115 \
--hash=sha256:3018294ebefce6572a474f0604c2021e33b3fd8006ecd11d62107a5d2a963527
xlrd==1.2.0 \
--hash=sha256:e551fb498759fa3a5384a94ccd4c3c02eb7c00ea424426e212ac0c57be9dfbde \
--hash=sha256:546eb36cee8db40c3eaa46c351e67ffee6eeb5fa2650b71bc4c758a29a1b29b2
appdirs==1.4.4 ; python_version >= "3.9" and python_version < "4.0"
attrs==23.1.0 ; python_version >= "3.9" and python_version < "4.0"
cattrs==22.2.0 ; python_version >= "3.9" and python_version < "4.0"
certifi==2022.12.7 ; python_version >= "3.9" and python_version < "4.0"
charset-normalizer==3.1.0 ; python_version >= "3.9" and python_version < "4.0"
colorama==0.4.6 ; python_version >= "3.9" and python_version < "4.0"
country-converter==1.0.0 ; python_version >= "3.9" and python_version < "4.0"
exceptiongroup==1.1.1 ; python_version >= "3.9" and python_version < "3.11"
ftfy==6.1.1 ; python_version >= "3.9" and python_version < "4"
idna==3.4 ; python_version >= "3.9" and python_version < "4.0"
langid==1.1.6 ; python_version >= "3.9" and python_version < "4.0"
numpy==1.24.3 ; python_version >= "3.9" and python_version < "4.0"
pandas==2.0.1 ; python_version >= "3.9" and python_version < "4.0"
pyarrow==11.0.0 ; python_version >= "3.9" and python_version < "4.0"
pycountry @ git+https://github.com/alanorth/pycountry@iso-codes-4.13.0 ; python_version >= "3.9" and python_version < "4.0"
python-dateutil==2.8.2 ; python_version >= "3.9" and python_version < "4.0"
python-stdnum==1.18 ; python_version >= "3.9" and python_version < "4.0"
pytz==2023.3 ; python_version >= "3.9" and python_version < "4.0"
requests-cache==0.9.8 ; python_version >= "3.9" and python_version < "4.0"
requests==2.29.0 ; python_version >= "3.9" and python_version < "4.0"
six==1.16.0 ; python_version >= "3.9" and python_version < "4.0"
tzdata==2023.3 ; python_version >= "3.9" and python_version < "4.0"
url-normalize==1.4.3 ; python_version >= "3.9" and python_version < "4.0"
urllib3==1.26.15 ; python_version >= "3.9" and python_version < "4.0"
wcwidth==0.2.6 ; python_version >= "3.9" and python_version < "4"


@@ -1,6 +0,0 @@
[isort]
multi_line_output=3
include_trailing_comma=True
force_grid_wrap=0
use_parentheses=True
line_length=88
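
The deleted block above configured isort's "vertical hanging indent" style (multi_line_output=3) matched to black's 88-column line length, so a long import is wrapped with one name per line and a trailing comma. For a hypothetical module it would render imports like:

from csv_metadata_quality.check import (
    agrovoc,
    duplicate_items,
    issn,
)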


@@ -1,38 +0,0 @@
import setuptools

with open("README.md", "r") as fh:
    long_description = fh.read()

install_requires = [
    "pandas",
    "python-stdnum",
    "requests",
    "requests-cache",
    "pycountry",
    "langid",
]

setuptools.setup(
    name="csv-metadata-quality",
    version="0.4.2",
    author="Alan Orth",
    author_email="aorth@mjanja.ch",
    description="A simple, but opinionated CSV quality checking and fixing pipeline for CSVs in the DSpace ecosystem.",
    license="GPLv3",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/alanorth/csv-metadata-quality",
    classifiers=[
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
        "Operating System :: OS Independent",
        "Development Status :: 4 - Beta",
    ],
    packages=["csv_metadata_quality"],
    entry_points={
        "console_scripts": ["csv-metadata-quality = csv_metadata_quality.__main__:main"]
    },
    install_requires=install_requires,
)
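
# The console_scripts entry point above is what puts a csv-metadata-quality
# command on $PATH after installation; it simply calls main() from
# csv_metadata_quality.__main__. A rough equivalence, assuming main() reads
# its options from sys.argv (the -i/-o flags here are illustrative):
import sys
from csv_metadata_quality.__main__ import main

sys.argv = ["csv-metadata-quality", "-i", "input.csv", "-o", "output.csv"]
main()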


@@ -1,4 +1,7 @@
# SPDX-License-Identifier: GPL-3.0-only
import pandas as pd
from colorama import Fore
import csv_metadata_quality.check as check
import csv_metadata_quality.experimental as experimental
@@ -12,7 +15,7 @@ def test_check_invalid_issn(capsys):
check.issn(value)
captured = capsys.readouterr()
assert captured.out == f"Invalid ISSN: {value}\n"
assert captured.out == f"{Fore.RED}Invalid ISSN: {Fore.RESET}{value}\n"
def test_check_valid_issn():
@@ -22,7 +25,7 @@ def test_check_valid_issn():
result = check.issn(value)
assert result == value
assert result is None
def test_check_invalid_isbn(capsys):
@@ -33,7 +36,7 @@ def test_check_invalid_isbn(capsys):
check.isbn(value)
captured = capsys.readouterr()
assert captured.out == f"Invalid ISBN: {value}\n"
assert captured.out == f"{Fore.RED}Invalid ISBN: {Fore.RESET}{value}\n"
def test_check_valid_isbn():
@@ -43,32 +46,7 @@ def test_check_valid_isbn():
result = check.isbn(value)
assert result == value
def test_check_invalid_separators(capsys):
"""Test checking invalid multi-value separators."""
value = "Alan|Orth"
field_name = "dc.contributor.author"
check.separators(value, field_name)
captured = capsys.readouterr()
assert captured.out == f"Invalid multi-value separator ({field_name}): {value}\n"
def test_check_valid_separators():
"""Test checking valid multi-value separators."""
value = "Alan||Orth"
field_name = "dc.contributor.author"
result = check.separators(value, field_name)
assert result == value
assert result is None
def test_check_missing_date(capsys):
@@ -81,7 +59,7 @@ def test_check_missing_date(capsys):
check.date(value, field_name)
captured = capsys.readouterr()
assert captured.out == f"Missing date ({field_name}).\n"
assert captured.out == f"{Fore.RED}Missing date ({field_name}).{Fore.RESET}\n"
def test_check_multiple_dates(capsys):
@@ -94,7 +72,10 @@ def test_check_multiple_dates(capsys):
check.date(value, field_name)
captured = capsys.readouterr()
assert captured.out == f"Multiple dates not allowed ({field_name}): {value}\n"
assert (
captured.out
== f"{Fore.RED}Multiple dates not allowed ({field_name}): {Fore.RESET}{value}\n"
)
def test_check_invalid_date(capsys):
@@ -107,7 +88,9 @@ def test_check_invalid_date(capsys):
check.date(value, field_name)
captured = capsys.readouterr()
assert captured.out == f"Invalid date ({field_name}): {value}\n"
assert (
captured.out == f"{Fore.RED}Invalid date ({field_name}): {Fore.RESET}{value}\n"
)
def test_check_valid_date():
@@ -119,7 +102,7 @@ def test_check_valid_date():
result = check.date(value, field_name)
assert result == value
assert result is None
def test_check_suspicious_characters(capsys):
@@ -132,7 +115,10 @@ def test_check_suspicious_characters(capsys):
check.suspicious_characters(value, field_name)
captured = capsys.readouterr()
assert captured.out == f"Suspicious character ({field_name}): ˆt\n"
assert (
captured.out
== f"{Fore.YELLOW}Suspicious character ({field_name}): {Fore.RESET}ˆt\n"
)
def test_check_valid_iso639_1_language():
@@ -142,7 +128,7 @@ def test_check_valid_iso639_1_language():
result = check.language(value)
assert result == value
assert result is None
def test_check_valid_iso639_3_language():
@ -152,7 +138,7 @@ def test_check_valid_iso639_3_language():
result = check.language(value)
assert result == value
assert result is None
def test_check_invalid_iso639_1_language(capsys):
@@ -163,7 +149,9 @@ def test_check_invalid_iso639_1_language(capsys):
check.language(value)
captured = capsys.readouterr()
assert captured.out == f"Invalid ISO 639-1 language: {value}\n"
assert (
captured.out == f"{Fore.RED}Invalid ISO 639-1 language: {Fore.RESET}{value}\n"
)
def test_check_invalid_iso639_3_language(capsys):
@@ -174,7 +162,9 @@ def test_check_invalid_iso639_3_language(capsys):
check.language(value)
captured = capsys.readouterr()
assert captured.out == f"Invalid ISO 639-3 language: {value}\n"
assert (
captured.out == f"{Fore.RED}Invalid ISO 639-3 language: {Fore.RESET}{value}\n"
)
def test_check_invalid_language(capsys):
@@ -185,30 +175,57 @@ def test_check_invalid_language(capsys):
check.language(value)
captured = capsys.readouterr()
assert captured.out == f"Invalid language: {value}\n"
assert captured.out == f"{Fore.RED}Invalid language: {Fore.RESET}{value}\n"
def test_check_invalid_agrovoc(capsys):
"""Test invalid AGROVOC subject."""
"""Test invalid AGROVOC subject. Invalid values *will not* be dropped."""
value = "FOREST"
field_name = "dc.subject"
valid_agrovoc = "LIVESTOCK"
invalid_agrovoc = "FOREST"
value = f"{valid_agrovoc}||{invalid_agrovoc}"
field_name = "dcterms.subject"
drop = False
check.agrovoc(value, field_name)
new_value = check.agrovoc(value, field_name, drop)
captured = capsys.readouterr()
assert captured.out == f"Invalid AGROVOC ({field_name}): {value}\n"
assert (
captured.out
== f"{Fore.RED}Invalid AGROVOC ({field_name}): {Fore.RESET}{invalid_agrovoc}\n"
)
assert new_value == value
def test_check_invalid_agrovoc_dropped(capsys):
"""Test invalid AGROVOC subjects. Invalid values *will* be dropped."""
valid_agrovoc = "LIVESTOCK"
invalid_agrovoc = "FOREST"
value = f"{valid_agrovoc}||{invalid_agrovoc}"
field_name = "dcterms.subject"
drop = True
new_value = check.agrovoc(value, field_name, drop)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.GREEN}Dropping invalid AGROVOC ({field_name}): {Fore.RESET}{invalid_agrovoc}\n"
)
assert new_value == valid_agrovoc
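
# Together these two tests pin down the drop behaviour: with drop=True the
# invalid term is removed and the survivors are rejoined on "||". A rough
# sketch of that filtering step, with is_valid_term as a hypothetical
# stand-in for the real AGROVOC API lookup:
def drop_invalid_agrovoc(value, is_valid_term):
    return "||".join(term for term in value.split("||") if is_valid_term(term))

assert drop_invalid_agrovoc("LIVESTOCK||FOREST", lambda t: t == "LIVESTOCK") == "LIVESTOCK"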
def test_check_valid_agrovoc():
"""Test valid AGROVOC subject."""
value = "FORESTS"
field_name = "dc.subject"
field_name = "dcterms.subject"
drop = False
result = check.agrovoc(value, field_name)
result = check.agrovoc(value, field_name, drop)
assert result == value
assert result == "FORESTS"
def test_check_uncommon_filename_extension(capsys):
@@ -219,7 +236,10 @@ def test_check_uncommon_filename_extension(capsys):
check.filename_extension(value)
captured = capsys.readouterr()
assert captured.out == f"Filename with uncommon extension: {value}\n"
assert (
captured.out
== f"{Fore.YELLOW}Filename with uncommon extension: {Fore.RESET}{value}\n"
)
def test_check_common_filename_extension():
@@ -229,7 +249,7 @@ def test_check_common_filename_extension():
result = check.filename_extension(value)
assert result == value
assert result is None
def test_check_incorrect_iso_639_1_language(capsys):
@@ -237,17 +257,18 @@ def test_check_incorrect_iso_639_1_language(capsys):
title = "A randomised vaccine field trial in Kenya demonstrates protection against wildebeest-associated malignant catarrhal fever in cattle"
language = "es"
exclude = []
# Create a dictionary to mimic a pandas Series
row = {"dc.title": title, "dc.language.iso": language}
series = pd.Series(row)
experimental.correct_language(series)
experimental.correct_language(series, exclude)
captured = capsys.readouterr()
assert (
captured.out
== f"Possibly incorrect language {language} (detected en): {title}\n"
== f"{Fore.YELLOW}Possibly incorrect language {language} (detected en): {Fore.RESET}{title}\n"
)
@@ -256,17 +277,18 @@ def test_check_incorrect_iso_639_3_language(capsys):
title = "A randomised vaccine field trial in Kenya demonstrates protection against wildebeest-associated malignant catarrhal fever in cattle"
language = "spa"
exclude = []
# Create a dictionary to mimic a pandas Series
row = {"dc.title": title, "dc.language.iso": language}
series = pd.Series(row)
experimental.correct_language(series)
experimental.correct_language(series, exclude)
captured = capsys.readouterr()
assert (
captured.out
== f"Possibly incorrect language {language} (detected eng): {title}\n"
== f"{Fore.YELLOW}Possibly incorrect language {language} (detected eng): {Fore.RESET}{title}\n"
)
@@ -275,14 +297,15 @@ def test_check_correct_iso_639_1_language():
title = "A randomised vaccine field trial in Kenya demonstrates protection against wildebeest-associated malignant catarrhal fever in cattle"
language = "en"
exclude = []
# Create a dictionary to mimic a pandas Series
row = {"dc.title": title, "dc.language.iso": language}
series = pd.Series(row)
result = experimental.correct_language(series)
result = experimental.correct_language(series, exclude)
assert result == language
assert result is None
def test_check_correct_iso_639_3_language():
@@ -290,11 +313,202 @@ def test_check_correct_iso_639_3_language():
title = "A randomised vaccine field trial in Kenya demonstrates protection against wildebeest-associated malignant catarrhal fever in cattle"
language = "eng"
exclude = []
# Create a dictionary to mimic a pandas Series
row = {"dc.title": title, "dc.language.iso": language}
series = pd.Series(row)
result = experimental.correct_language(series)
result = experimental.correct_language(series, exclude)
assert result == language
assert result is None
def test_check_valid_spdx_license_identifier():
"""Test valid SPDX license identifier."""
license = "CC-BY-SA-4.0"
result = check.spdx_license_identifier(license)
assert result is None
def test_check_invalid_spdx_license_identifier(capsys):
"""Test invalid SPDX license identifier."""
license = "CC-BY-SA"
check.spdx_license_identifier(license)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.YELLOW}Non-SPDX license identifier: {Fore.RESET}{license}\n"
)
def test_check_duplicate_item(capsys):
"""Test item with duplicate title, type, and date."""
item_title = "Title"
item_type = "Report"
item_date = "2021-03-17"
d = {
"dc.title": [item_title, item_title],
"dcterms.type": [item_type, item_type],
"dcterms.issued": [item_date, item_date],
}
df = pd.DataFrame(data=d)
check.duplicate_items(df)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.YELLOW}Possible duplicate (dc.title): {Fore.RESET}{item_title}\n"
)
def test_check_no_mojibake():
"""Test string with no mojibake."""
field = "CIAT Publicaçao"
field_name = "dcterms.isPartOf"
result = check.mojibake(field, field_name)
assert result is None
def test_check_mojibake(capsys):
"""Test string with mojibake."""
field = "CIAT Publicaçao"
field_name = "dcterms.isPartOf"
check.mojibake(field, field_name)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.YELLOW}Possible encoding issue ({field_name}): {Fore.RESET}{field}\n"
)
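
# ftfy (pinned in the requirements above) is the natural tool behind both
# mojibake tests; a minimal sketch, assuming ftfy>=6, which ships
# badness.is_bad() for detection and fix_text() for repair:
import ftfy
from ftfy.badness import is_bad

assert is_bad("CIAT PublicaÃ§ao")  # looks like UTF-8 read back as Latin-1
assert ftfy.fix_text("CIAT PublicaÃ§ao") == "CIAT Publicaçao"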
def test_check_doi_field():
"""Test an item with a DOI field."""
doi = "https://doi.org/10.1186/1743-422X-9-218"
citation = "Orth, A. 2021. Testing all the things. doi: 10.1186/1743-422X-9-218"
# Emulate a column in a transposed dataframe (which is just a series), with
# the citation and a DOI field.
d = {"cg.identifier.doi": doi, "dcterms.bibliographicCitation": citation}
series = pd.Series(data=d)
exclude = []
result = check.citation_doi(series, exclude)
assert result is None
def test_check_doi_only_in_citation(capsys):
"""Test an item with a DOI in its citation, but no DOI field."""
citation = "Orth, A. 2021. Testing all the things. doi: 10.1186/1743-422X-9-218"
exclude = []
# Emulate a column in a transposed dataframe (which is just a series), with
# an empty DOI field and a citation containing a DOI.
d = {"cg.identifier.doi": None, "dcterms.bibliographicCitation": citation}
series = pd.Series(data=d)
check.citation_doi(series, exclude)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.YELLOW}DOI in citation, but missing a DOI field: {Fore.RESET}{citation}\n"
)
def test_title_in_citation():
"""Test an item with its title in the citation."""
title = "Testing all the things"
citation = "Orth, A. 2021. Testing all the things."
exclude = []
# Emulate a column in a transposed dataframe (which is just a series), with
# the title and citation.
d = {"dc.title": title, "dcterms.bibliographicCitation": citation}
series = pd.Series(data=d)
result = check.title_in_citation(series, exclude)
assert result is None
def test_title_not_in_citation(capsys):
"""Test an item with its title missing from the citation."""
title = "Testing all the things"
citation = "Orth, A. 2021. Testing all teh things."
exclude = []
# Emulate a column in a transposed dataframe (which is just a series), with
# the title and citation.
d = {"dc.title": title, "dcterms.bibliographicCitation": citation}
series = pd.Series(data=d)
check.title_in_citation(series, exclude)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.YELLOW}Title is not present in citation: {Fore.RESET}{title}\n"
)
def test_country_matches_region():
"""Test an item with regions matching its country list."""
country = "Kenya"
region = "Eastern Africa"
exclude = []
# Emulate a column in a transposed dataframe (which is just a series)
d = {"cg.coverage.country": country, "cg.coverage.region": region}
series = pd.Series(data=d)
result = check.countries_match_regions(series, exclude)
assert result is None
def test_country_not_matching_region(capsys):
"""Test an item with regions not matching its country list."""
title = "Testing an item with no matching region."
country = "Kenya"
region = ""
missing_region = "Eastern Africa"
exclude = []
# Emulate a column in a transposed dataframe (which is just a series)
d = {
"dc.title": title,
"cg.coverage.country": country,
"cg.coverage.region": region,
}
series = pd.Series(data=d)
check.countries_match_regions(series, exclude)
captured = capsys.readouterr()
assert (
captured.out
== f"{Fore.YELLOW}Missing region ({country} → {missing_region}): {Fore.RESET}{title}\n"
)
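
# The duplicate check exercised above keys on title, type, and issue date.
# A minimal sketch of that detection with pandas, assuming those exact
# column names (duplicated() marks every repeat after the first occurrence):
import pandas as pd

df = pd.DataFrame(
    {
        "dc.title": ["Title", "Title"],
        "dcterms.type": ["Report", "Report"],
        "dcterms.issued": ["2021-03-17", "2021-03-17"],
    }
)
mask = df.duplicated(subset=["dc.title", "dcterms.type", "dcterms.issued"])
for title in df.loc[mask, "dc.title"]:
    print(f"Possible duplicate (dc.title): {title}")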


@@ -1,3 +1,7 @@
# SPDX-License-Identifier: GPL-3.0-only
import pandas as pd
import csv_metadata_quality.fix as fix
@@ -41,6 +45,16 @@ def test_fix_invalid_separators():
assert fix.separators(value, field_name) == "Alan||Orth"
def test_fix_unnecessary_separators():
"""Test fixing unnecessary multi-value separators."""
field = "Alan||Orth||"
field_name = "dc.contributor.author"
assert fix.separators(field, field_name) == "Alan||Orth"
def test_fix_unnecessary_unicode():
"""Test fixing unnecessary Unicode."""
@@ -64,8 +78,9 @@ def test_fix_newlines():
value = """Ken
ya"""
field_name = "dcterms.subject"
assert fix.newlines(value) == "Kenya"
assert fix.newlines(value, field_name) == "Kenya"
def test_fix_comma_space():
@@ -98,3 +113,50 @@ def test_fix_decomposed_unicode():
field_name = "dc.contributor.author"
assert fix.normalize_unicode(value, field_name) == "Ouédraogo, Mathieu"
def test_fix_mojibake():
"""Test string with no mojibake."""
field = "CIAT Publicaçao"
field_name = "dcterms.isPartOf"
assert fix.mojibake(field, field_name) == "CIAT Publicaçao"
def test_fix_country_not_matching_region():
"""Test an item with regions not matching its country list."""
title = "Testing an item with no matching region."
country = "Kenya"
region = ""
missing_region = "Eastern Africa"
exclude = []
# Emulate a column in a transposed dataframe (which is just a series)
d = {
"dc.title": title,
"cg.coverage.country": country,
"cg.coverage.region": region,
}
series = pd.Series(data=d)
result = fix.countries_match_regions(series, exclude)
# Emulate the correct series we are expecting
d_correct = {
"dc.title": title,
"cg.coverage.country": country,
"cg.coverage.region": missing_region,
}
series_correct = pd.Series(data=d_correct)
pd.testing.assert_series_equal(result, series_correct)
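
# The expected series above gains the missing "Eastern Africa" for Kenya. A
# minimal sketch of that fill-in, using a hypothetical country→region table
# (the pinned country-converter package is the likely real data source):
COUNTRY_TO_REGION = {"Kenya": "Eastern Africa"}  # illustrative subset only

def fill_missing_regions(series):
    countries = [c for c in series["cg.coverage.country"].split("||") if c]
    regions = [r for r in series["cg.coverage.region"].split("||") if r]
    for country in countries:
        region = COUNTRY_TO_REGION.get(country)
        if region and region not in regions:
            regions.append(region)
    series["cg.coverage.region"] = "||".join(regions)
    return series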
def test_fix_normalize_dois():
"""Test normalizing a DOI."""
value = "doi: 10.11648/j.jps.20140201.14"
assert fix.normalize_dois(value) == "https://doi.org/10.11648/j.jps.20140201.14"
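
# The fix above rewrites a bare "doi:" reference into the canonical
# https://doi.org/ form. A minimal sketch with a regular expression (the
# real normalizer may handle more variants, e.g. dx.doi.org URLs):
import re

def normalize_doi(value):
    # Match the DOI itself (10.<registrant>/<suffix>) wherever it appears
    match = re.search(r"10\.\d{4,9}/\S+", value)
    return f"https://doi.org/{match.group(0)}" if match else value

assert normalize_doi("doi: 10.11648/j.jps.20140201.14") == (
    "https://doi.org/10.11648/j.jps.20140201.14"
)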