On Linux and macOS we can run these scripts directly because they
have executable permissions and the shebang line points to a sane
Python, but on Windows only God can help us. Better to write the
slightly lamer invocation here with Python directly (even though
the slash style will be different)...
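One portable way to express that invocation, sketched in Python (the helper name and script path are illustrative, not from the project):

```python
import subprocess
import sys
from pathlib import Path

def run_script(script: str) -> int:
    """Invoke a Python script via the current interpreter instead of
    relying on the shebang line and executable bit, which Windows ignores."""
    # sys.executable always points at a valid interpreter on every platform,
    # and pathlib normalizes the slash style for the current OS.
    cmd = [sys.executable, str(Path(script))]
    return subprocess.run(cmd).returncode
```

The same idea applies on the command line: write `python util/script.py` rather than `./util/script.py`.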
We are planning to remove the controlled vocabularies from the CSV
files so we should not expect that this column will exist. Instead,
check if there is a controlled vocabulary in the data directory.
The controlled vocabularies were already exported once using the
util/export-controlled-vocabularies.py script so we don't actually
need them in the CSVs anymore.
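The check could look something like this (the `data/controlled-vocabularies` layout and one-file-per-field naming are my assumptions, not necessarily the project's actual layout):

```python
from pathlib import Path

def has_controlled_vocabulary(field_name: str, data_dir: str = "data") -> bool:
    """Return True if an exported vocabulary file exists for this field,
    e.g. data/controlled-vocabularies/dc.subject.txt (hypothetical layout)."""
    vocab = Path(data_dir) / "controlled-vocabularies" / f"{field_name}.txt"
    return vocab.is_file()
```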
Most of these are minor and would have been selected by the semantic
version string during `poetry install`, but I want to make sure that
they are as current as possible before I leave the project.
In the case of Pandas 1.4.0 the minimum Python version is actually
only 3.8, so let's set that as the minimum.
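In pyproject.toml terms that would look roughly like this (the version strings are illustrative, not the project's exact pins):

```toml
[tool.poetry.dependencies]
python = "^3.8"
pandas = "^1.4.0"
```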
I need to start collecting and organizing documentation for various
technical workflows here. For example:
- Adding a new schema element
- Adding a new schema extension
- Updating controlled vocabularies
- Updating the documentation site layout
Etc...
Generated with poetry:
$ poetry export -f requirements.txt > requirements.txt
This is useful for people who don't have Poetry and will use Python
with vanilla virtual environments.
Detect actual HTTP return codes for various situations:
- HTTP 500 means the schema or field already exists
- HTTP 415 means we are posting some invalid data
- HTTP 404 means the parent schema does not exist
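As a sketch of handling the codes above (the function name and wording are mine, not from the project):

```python
def interpret_status(code: int) -> str:
    """Map the HTTP status codes returned when POSTing a schema or
    field to a human-readable explanation of what went wrong."""
    meanings = {
        500: "the schema or field already exists",
        415: "we are posting some invalid data",
        404: "the parent schema does not exist",
    }
    # 2xx indicates success; anything else not listed is unexpected
    return meanings.get(code, f"HTTP {code}")
```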
It's what I use locally and it's the latest supported version, so
starting with the newest version possible will let this project keep
working a bit longer as time goes on.
Don't automatically commit changes to the Hugo site content unless
we are on the main branch. Committing on other branches is unnecessary
and makes it tricky to work on feature branches or pull requests.
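If this runs in GitHub Actions (an assumption; adjust for the actual CI system), the guard could be a step-level condition like the following, where the step name and paths are illustrative:

```yaml
# Only commit regenerated Hugo content when the workflow runs on main
- name: Commit site content
  if: github.ref == 'refs/heads/main'
  run: |
    git add content
    git commit -m "Update Hugo site content" || true
```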
Now they are named using their DSpace field name. I generated them
like this:
$ ./util/export-controlled-vocabularies.py -i data/iseal-core.csv --clean -d
$ ./util/export-controlled-vocabularies.py -i data/fsc.csv -d
This script is only used to export the controlled vocabularies from
the schema CSV files. Eventually we will remove them from there and
it won't be needed anymore.