Compare commits

No commits in common. "9951d6598d7136d5117fd9d8939c31cda4423d98" and "362c7f87797951ef8923faf30b3fd9653e08e7a6" have entirely different histories.

3 changed files with 11 additions and 52 deletions


@@ -8,14 +8,14 @@ The ISEAL Core Metadata Set is maintained primarily in CSV format. This decision
 - The ISEAL Core Metadata Set, which lives in `data/iseal-core.csv`
 - The FSC<sup>®</sup> extension, which lives in `data/fsc.csv`
-From the CSV we use a series of Python scripts to create the RDF ([TTL](https://en.wikipedia.org/wiki/Turtle_(syntax))) representations of the schema as well as the HTML documentation site. All of this is automated using GitHub Actions (see `.github/workflows`) whenever there is a new commit in the repository. You should only need to follow the documentation here if you want to work on the workflow locally or make larger changes. In that case, continue reading...
-## Technical Requirements
-- Python 3.8+ — to parse the CSV schemas, generate the RDF files, and populate the documentation site content
-- Node.js 12+ and NPM — to generate the documentation site HTML
-### Python Setup
+From the CSV we use a series of Python scripts to create the RDF ([TTL](https://en.wikipedia.org/wiki/Turtle_(syntax))) representations of the schema as well as the HTML documentation site. All of this is automated using GitHub Actions (see `.github/workflows`) whenever there is a new commit in the repository. Everything should Just Work<sup>™</sup> so you should only need to follow the documentation here if you want to work on the workflow locally or make larger changes. In that case, continue reading...
+## General Requirements
+- Python 3.8+
+- Node.js 12+ and NPM
+## Python Setup
 Create a Python virtual environment and install the requirements:
 ```console
@@ -24,57 +24,18 @@ $ source virtualenv/bin/activate
 $ pip install -r requirements.txt
 ```
-Once you have the Python environment set up you will be able to use the utility scripts:
-- `./util/generate-hugo-content.py` — to parse the CSV schemas and controlled vocabularies, then populate the documentation site content
-- `./util/create-rdf.py` — to parse the CSV schemas and create the RDF (TTL) files
-If you have made modifications to the CSV schemas—adding elements, changing descriptions, etc—and you want to test them locally before pushing to GitHub, then you will need to re-run the utility scripts:
+Then run the utility scripts to parse the schemas:
 ```console
-$ python ./util/generate-hugo-content.py -i ./data/iseal-core.csv --clean -d
-$ python ./util/generate-hugo-content.py -i ./data/fsc.csv -d
-$ python ./util/create-rdf.py
+$ ./util/generate-hugo-content.py -i ./data/iseal-core.csv --clean -d
+$ ./util/generate-hugo-content.py -i data/fsc.csv -d
 ```
-Assuming these scripts ran without crashing, you can check your `git status` to see if anything was updated and then proceed to regenerating the documentation site HTML.
-### Node.js Setup
-Install the web tooling and dependencies required to build the site:
+## Node.js Setup
+To generate the HTML documentation site:
 ```console
 $ cd site
 $ npm install
-```
-The Python scripts above only populated the *content* for the documentation site. To regenerate the actual HTML for the documentation site you must run the `npm build` script:
-```console
 $ npm run build
 ```
-Alternatively, you can view the site locally using the `npm run server` command:
-```console
-$ npm run server
-```
-The site will be built in memory and available at: http://localhost:1313/iseal-core/
-## Workflows
-These are some common, basic workflows:
-- Add new metadata element(s) → re-run Python scripts and regenerate documentation site
-- Update metadata descriptions → re-run Python scripts and regenerate documentation site
-- Update controlled vocabularies → re-run Python scripts and regenerate documentation site
-These are advanced workflows:
-- Change documentation site layout
-  - Requires editing templates in `site/layouts` and regenerating documentation site
-- Change documentation site style
-  - Requires editing styles in `site/source/scss` and regenerating documentation site
-- Add a new schema extension
-  - Requires editing Python utility scripts
-  - Requires editing styles in `site/source/scss` and regenerating documentation site
-  - Requires editing templates in `site/layouts` and regenerating documentation site
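The README changes above describe a pipeline that turns the CSV schemas into RDF (Turtle) via `./util/create-rdf.py`. As a rough, standalone sketch of that idea (this is not the actual script — the column names, example element, and base URI here are invented for illustration):

```python
import csv
import io

# Hypothetical miniature of the CSV-to-Turtle step; the real schema CSVs
# have more columns and the real script emits a richer vocabulary.
SAMPLE_CSV = """element name,element description
dateCertified,Date the organization was certified
"""

def rows_to_turtle(csv_text: str, base_uri: str = "https://example.org/schema#") -> str:
    """Emit one rdf:Property declaration per CSV row (illustrative only)."""
    lines = [
        "@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for row in csv.DictReader(io.StringIO(csv_text)):
        name = row["element name"].strip()
        description = row["element description"].strip()
        lines.append(f"<{base_uri}{name}> a rdf:Property ;")
        lines.append(f'    rdfs:comment "{description}" .')
    return "\n".join(lines)

print(rows_to_turtle(SAMPLE_CSV))
```

The point is only the shape of the transformation: each CSV row becomes one property declaration in the generated TTL, which is why re-running the utility scripts after any CSV edit regenerates everything downstream.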


@@ -22,11 +22,9 @@ Consult [`README-dev.md`](README-dev.md) for technical information about making
 - Repository
   - Add more information and instructions to README.md
-  - Update GitHub Actions once `util/create-rdf.py` is fixed
 - Schema
   - Remove combined "latLong" fields (they can be inferred from the separate fields)
   - Remove controlled vocabularies from the schema CSVs
-  - Update `util/create-rdf.py`
 - Site
   - Change "Suggested element" to "DSpace mapping"?
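The "latLong" TODO item rests on a simple observation: a combined field is derivable from the separate latitude and longitude values, so storing it is redundant. A minimal illustration (the field names and combined format here are hypothetical, not taken from the schema):

```python
def combine_lat_long(latitude: str, longitude: str) -> str:
    """Derive a combined "latLong" value from the separate fields."""
    return f"{latitude},{longitude}"

# A record that stores only the separate fields can still produce the
# combined form on demand.
record = {"latitude": "51.5074", "longitude": "-0.1278"}
print(combine_lat_long(record["latitude"], record["longitude"]))  # → 51.5074,-0.1278
```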


@@ -90,7 +90,7 @@ def parseSchema(schema_df):
         cardinality = row["element options"].capitalize()
         prop_type = row["element type"].capitalize()
-        if os.path.isfile(f"data/controlled-vocabularies/{element_name_safe}.txt"):
+        if row["element controlled values or terms"]:
             controlled_vocab = True
             controlled_vocabulary_src = (
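The change in this last hunk swaps a filesystem probe for a lookup in the CSV row itself: rather than checking whether a `data/controlled-vocabularies/<element>.txt` file exists on disk, the new code treats a non-empty "element controlled values or terms" column as the signal that an element has a controlled vocabulary. A standalone sketch of the new check (the row is a plain dict here, and the element names are invented; in the real script the row comes from a pandas DataFrame, where empty cells should be normalized to empty strings first so they stay falsy):

```python
def has_controlled_vocab(row: dict) -> bool:
    """Mirror the new check: a non-empty column value marks a controlled vocabulary."""
    return bool(row.get("element controlled values or terms"))

# Hypothetical rows illustrating both branches of the check.
rows = [
    {"element name": "certificationStatus",
     "element controlled values or terms": "certified;suspended"},
    {"element name": "organizationName",
     "element controlled values or terms": ""},
]
for row in rows:
    print(row["element name"], has_controlled_vocab(row))
```

This also explains the two TODO items deleted in the second file: with the check driven by the CSV column, `util/create-rdf.py` no longer depends on the vocabulary files being present on disk.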