---
title: "January, 2020"
date: 2020-01-06T10:48:30+02:00
author: "Alan Orth"
categories:
---
## 2020-01-06
- Open a ticket with Atmire to request a quote for the upgrade to DSpace 6
- Last week Altmetric responded about the item that had a lower score than its DOI
## 2020-01-07
- Peter Ballantyne highlighted one more WLE item that is missing the Altmetric score that its DOI has
- The DOI has a score of 259, but the Handle has no score at all
- I tweeted the CGSpace repository link
## 2020-01-08
- Export a list of authors from CGSpace for Peter Ballantyne to look through and correct:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "dc.contributor.author", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 3 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-08-authors.csv WITH CSV HEADER;
COPY 68790
```
- As I always have encoding issues with files Peter sends, I tried to convert it to some Windows encoding, but got an error:
```console
$ iconv -f utf-8 -t windows-1252 /tmp/2020-01-08-authors.csv -o /tmp/2020-01-08-authors-windows.csv
iconv: illegal input sequence at position 104779
```
- According to this trick the troublesome character is on line 5227:
```console
$ awk 'END {print NR": "$0}' /tmp/2020-01-08-authors-windows.csv
5227: "Oue
$ sed -n '5227p' /tmp/2020-01-08-authors.csv | xxd -c1
00000000: 22  "
00000001: 4f  O
00000002: 75  u
00000003: 65  e
00000004: cc  .
00000005: 81  .
00000006: 64  d
00000007: 72  r
```
- According to the blog post linked above the troublesome character is probably the "High Octet Preset" (81), which vim identifies (using `ga` on the character) as:

```console
<e> 101, Hex 65, Octal 145 < ́> 769, Hex 0301, Octal 1401
```
- If I understand the situation correctly it sounds like this means that the character is not actually encoded as UTF-8, so it's stored incorrectly in the database...
- Other encodings like `windows-1251` and `windows-1257` also fail on different characters like "ž" and "é" that are legitimate UTF-8 characters
- Then there is the issue of Russian, Chinese, etc characters, which are simply not representable in any of those encodings
- I think the solution is to upload it to Google Docs, or just send it to him and deal with each case manually in the corrections he sends me
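- A minimal Python sketch, using the bytes from the xxd dump above, shows what is going on: the bytes decode fine as UTF-8, but the accent is stored as a separate combining character that windows-1252 simply cannot represent:

```python
raw = b'Oue\xcc\x81dr'      # bytes 4f 75 65 cc 81 64 72 from the xxd dump above
name = raw.decode('utf-8')  # succeeds: 'e' followed by U+0301 COMBINING ACUTE ACCENT

try:
    name.encode('windows-1252')
except UnicodeEncodeError as err:
    # windows-1252 has no combining characters, so this fails just like iconv did
    print(err)
```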
- Re-deploy DSpace Test (linode19) with a fresh snapshot of the CGSpace database and assetstore, and using the `5_x-prod` (no CG Core v2) branch
## 2020-01-14
- I checked the yearly Solr statistics sharding cron job that should have run on 2020-01 on CGSpace (linode18) and saw that there was an error
- I manually ran it on the server as the DSpace user and it said "Moving: 51633080 into core statistics-2019"
- After a few hours it died with the same error that I had seen in the log from the first run:
```console
Exception: Read timed out
java.net.SocketTimeoutException: Read timed out
```
- I am not sure how I will fix that shard...
- I discovered a very interesting tool called ftfy that attempts to fix errors in UTF-8
- I'm curious to start checking input files with this to see what it highlights
- I ran it on the authors file from last week and it converted characters like those with Spanish accents from multi-byte combining sequences to precomposed characters (é→é), which vim identifies as:
```console
<e> 101, Hex 65, Octal 145 < ́> 769, Hex 0301, Octal 1401
<é> 233, Hex 00e9, Oct 351, Digr e'
```
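- A minimal sketch of that behavior, using the name fragment from the authors file above:

```python
import ftfy  # pip install ftfy

decomposed = 'Oue\u0301dr'          # 'e' + U+0301, as stored in the database
fixed = ftfy.fix_text(decomposed)   # ftfy normalizes to NFC by default
print(len(decomposed), len(fixed))  # 6 5: the 'e' and the accent became one 'é'
```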
- Ah hah! We need to be normalizing characters into their canonical forms!
- In Python 3.8 we can even check if the string is normalized using the `unicodedata` library:
```python
In [7]: unicodedata.is_normalized('NFC', 'é')
Out[7]: False

In [8]: unicodedata.is_normalized('NFC', 'é')
Out[8]: True
```
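- Normalizing a string is just as easy, as in this minimal sketch:

```python
import unicodedata

decomposed = 'e\u0301'  # 'e' + combining acute accent, not NFC-normalized
composed = unicodedata.normalize('NFC', decomposed)  # one character: é (U+00E9)
assert unicodedata.is_normalized('NFC', composed)    # is_normalized() is Python 3.8+
```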
## 2020-01-15
- I added support for Unicode normalization to my csv-metadata-quality tool in v0.4.0
- Generate ILRI and Bioversity subject lists for Elizabeth Arnaud from Bioversity:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "cg.subject.ilri", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 203 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-15-ilri-subjects.csv WITH CSV HEADER;
COPY 144
dspace=# \COPY (SELECT DISTINCT text_value as "cg.subject.bioversity", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 120 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-15-bioversity-subjects.csv WITH CSV HEADER;
COPY 1325
```
- She will be meeting with FAO and will look over the terms to see if they can add some to AGROVOC
- I noticed a few errors in the ILRI subjects so I fixed them locally and on CGSpace (linode18) using my `fix-metadata-values.py` script:
```console
$ ./fix-metadata-values.py -i 2020-01-15-fix-8-ilri-subjects.csv -db dspace -u dspace -p 'fuuu' -f cg.subject.ilri -m 203 -t correct -d
```
## 2020-01-16
- Extract a list of CIAT subjects from CGSpace for Elizabeth Arnaud from Bioversity:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "cg.subject.ciat", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 122 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-16-ciat-subjects.csv WITH CSV HEADER;
COPY 35
```
- Start examining the 175 IITA records that Bosede originally sent in October, 2019 (201907.xls)
- We had delayed processing them because DSpace Test (linode19) was testing CG Core v2 implementation for the last few months
- Sisay uploaded the records to DSpace Test as IITA_201907_Jan13
- I started first with basic sanity checks using my csv-metadata-quality tool and found twenty-two items with extra whitespace, invalid multi-value separators, and duplicates, which means Sisay did not do any quality checking on the data
- I corrected one invalid AGROVOC subject
- Validate and normalize affiliations against our 2019-04 list using reconcile-csv and OpenRefine:
```console
$ lein run ~/src/git/DSpace/2019-04-08-affiliations.csv name id
```
- I always forget how to copy the reconciled values in OpenRefine, but you need to make a new column and populate it using this GREL:
```
if(cell.recon.matched, cell.recon.match.name, value)
```
## 2020-01-20
- Last week Atmire sent a quotation for the DSpace 6 upgrade that I had requested a few weeks ago
- I forwarded it to Peter et al for their comment
- We decided that we should probably buy enough credits to cover the upgrade and have 100 remaining for future development
- Visit CodeObia to discuss the next phase of AReS development
## 2020-01-21
- Create two accounts on CGSpace for CTA users
- Marie-Angelique finally responded to some of the pull requests I made on the CG Core v2 repository last month:
  - Merged: HTML syntax fixes
  - Merged: Add LICENSE file
  - Merged: Build main.css using npm build
  - Approved a wider scope for `cg.peer-reviewed` (renaming the field and using non-boolean values), but there is more discussion needed
- I opened a new pull request on the cg-core repository to validate and fix the formatting of the HTML files
- Create more issues for OpenRXV:
  - Based on Peter's feedback on the text for labels and tooltips
  - Based on Peter's feedback for the export icon
  - Based on Peter's feedback for the sort options
  - Based on Abenet's feedback that PDF and Word exports are not working
## 2020-01-22
- I tried to create a MaxMind account so I can download the GeoLite2-City database with a license key, but their server refuses to accept me:
```
Sorry, we were not able to create your account. Please ensure that you are using an email that is not disposable, and that you are not connecting via a proxy or VPN.
```
- They started limiting public access to the database in December, 2019 due to GDPR and CCPA
- This will be a problem in the future (see DS-4409)
- Peter sent me his corrections for the list of authors that I had sent him earlier in the month
- There were encoding issues when I checked the file in vim and using Python-based tools, but OpenRefine was able to read and export it as UTF-8
- I will apply them on CGSpace and DSpace Test using my `fix-metadata-values.py` script:
```console
$ ./fix-metadata-values.py -i /tmp/2020-01-08-fix-2302-authors.csv -db dspace -u dspace -p 'fuuu' -f dc.contributor.author -m 3 -t correct -d
```
- Then I decided to export them again (with two author columns) so I can run them through the new Unicode normalization mode I added to csv-metadata-quality:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "dc.contributor.author", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 3 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-22-authors.csv WITH CSV HEADER;
COPY 67314
dspace=# \q
$ csv-metadata-quality -i /tmp/2020-01-22-authors.csv -o /tmp/authors-normalized.csv -u --exclude-fields 'dc.date.issued,dc.date.issued[],dc.contributor.author'
$ ./fix-metadata-values.py -i /tmp/authors-normalized.csv -db dspace -u dspace -p 'fuuu' -f dc.contributor.author -m 3 -t correct
```
- Peter asked me to send him a list of affiliations to correct
- First I decided to export them and run the Unicode normalizations and syntax checks with csv-metadata-quality and re-import the cleaned up values:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "cg.contributor.affiliation", text_value as "correct", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 211 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-22-affiliations.csv WITH CSV HEADER;
COPY 6170
dspace=# \q
$ csv-metadata-quality -i /tmp/2020-01-22-affiliations.csv -o /tmp/affiliations-normalized.csv -u --exclude-fields 'dc.date.issued,dc.date.issued[],cg.contributor.affiliation'
$ ./fix-metadata-values.py -i /tmp/affiliations-normalized.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.affiliation -m 211 -t correct -n
```
- I applied the corrections on DSpace Test and CGSpace, and then scheduled a full Discovery reindex for later tonight:
```console
$ sleep 4h && time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b
```
- Then I generated a new list for Peter:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "cg.contributor.affiliation", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 211 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-22-affiliations.csv WITH CSV HEADER;
COPY 6162
```
- Abenet said she noticed that she gets different results on AReS and Atmire Listing and Reports, for example with author "Hung, Nguyen"
- I generated a report for 2019 and 2020 with each and I see there are indeed ten more Handles in the results from L&R:
```console
$ in2csv AReS-1-801dd394-54b5-436c-ad09-4f2e25f7e62e.xlsx | sed -E 's/10568 ([0-9]+)/10568\/\1/' | csvcut -c Handle | grep -v Handle | sort -u > hung-nguyen-ares-handles.txt
$ grep -oE '10568\/[0-9]+' hung-nguyen-atmire.txt | sort -u > hung-nguyen-atmire-handles.txt
$ wc -l hung-nguyen-a*handles.txt
  46 hung-nguyen-ares-handles.txt
  56 hung-nguyen-atmire-handles.txt
 102 total
```
- Comparing the lists of items, I see that nine of the ten missing items were added less than twenty-four hours ago, and the other was added last week, so they apparently just haven't been indexed yet
- I am curious to check tomorrow to see if they are there
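- For reference, a minimal Python sketch of the same comparison, using the two handle lists generated above:

```python
# Print the Handles that appear in the Atmire Listing and Reports export
# but are missing from the AReS export
with open('hung-nguyen-ares-handles.txt') as f:
    ares = set(f.read().split())
with open('hung-nguyen-atmire-handles.txt') as f:
    atmire = set(f.read().split())

for handle in sorted(atmire - ares):
    print(handle)
```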
## 2020-01-23
- I checked AReS and I see that there are now 55 items for author "Hung Nguyen-Viet"
- Linode sent an alert that the outbound traffic rate of CGSpace (linode18) was high for several hours this morning around 5AM UTC+1
- I checked the nginx logs this morning for the few hours before and after that using goaccess:
```console
# cat /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "23/Jan/2020:0[12345678]" | goaccess --log-format=COMBINED -
```
- The top two hosts according to the amount of data transferred are:
  - 2a01:7e00::f03c:91ff:fe9a:3a37
  - 2a01:7e00::f03c:91ff:fe18:7396
- Both are on Linode, and appear to be the new and old ilri.org servers
- I will ask the web team
- Judging from the ILRI publications site it seems they are downloading the PDFs so they can generate higher-quality thumbnails:
- They are apparently using this Drupal module to generate the thumbnails: `sites/all/modules/contrib/pdf_to_imagefield`
- I see some excellent suggestions in this ImageMagick thread from 2012 that led me to some nice thumbnails (default PDF density is 72, so supersample to 4X and then resize back to 25%) as well as this blog post:
```console
$ convert -density 288 -filter lagrange -thumbnail 25% -background white -alpha remove -sampling-factor 1:1 -colorspace sRGB 10568-97925.pdf\[0\] 10568-97925.jpg
```
- Here I'm also explicitly setting the background to white and removing any alpha layers, but I could probably also just keep using `-flatten` like DSpace already does
- I did some tests with a modified version of the above that uses `-flatten` and drops the sampling-factor and colorspace, but bumps up the image size to 600px (the default on CGSpace is currently 300):
```console
$ convert -density 288 -filter lagrange -resize 25% -flatten 10568-97925.pdf\[0\] 10568-97925-d288-lagrange.pdf.jpg
$ convert -flatten 10568-97925.pdf\[0\] 10568-97925.pdf.jpg
$ convert -thumbnail x600 10568-97925-d288-lagrange.pdf.jpg 10568-97925-d288-lagrange-thumbnail.pdf.jpg
$ convert -thumbnail x600 10568-97925.pdf.jpg 10568-97925-thumbnail.pdf.jpg
```
- This emulates DSpace's method of generating a high-quality image from the PDF and then creating a thumbnail
- I put together a proof of concept of this by adding the extra options to dspace-api's `ImageMagickThumbnailFilter.java` and it works
- I need to run tests on a handful of PDFs to see if there are any side effects
- The file size is about double the old ones, but the quality is very good and the file size is nowhere near ilri.org's 400KiB PNG!
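- A quick sketch of how I might batch-test the new options on a handful of PDFs (the `samples/` directory is hypothetical):

```python
import glob
import subprocess

# Render the first page of each sample PDF with the proposed ImageMagick
# options so the results can be compared with DSpace's current thumbnails
for pdf in glob.glob('samples/*.pdf'):
    out = pdf.replace('.pdf', '-test.jpg')
    subprocess.run(['convert', '-density', '288', '-filter', 'lagrange',
                    '-resize', '25%', '-flatten', f'{pdf}[0]', out], check=True)
```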
- Peter sent me the corrections and deletions for affiliations last night so I imported them into OpenRefine to work around the normal UTF-8 issue, ran them through csv-metadata-quality to make sure all Unicode values were normalized (NFC), then applied them on DSpace Test and CGSpace:
```console
$ csv-metadata-quality -i ~/Downloads/2020-01-22-fix-1113-affiliations.csv -o /tmp/2020-01-22-fix-1113-affiliations.csv -u --exclude-fields 'dc.date.issued,dc.date.issued[],cg.contributor.affiliation'
$ ./fix-metadata-values.py -i /tmp/2020-01-22-fix-1113-affiliations.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.affiliation -m 211 -t correct
$ ./delete-metadata-values.py -i /tmp/2020-01-22-delete-36-affiliations.csv -db dspace -u dspace -p 'fuuu' -f cg.contributor.affiliation -m 211
```
## 2020-01-26
- Add "Gender" to controlled vocabulary for CRPs (#442)
- Deploy the changes on CGSpace and run all updates on the server and reboot it
- I had to restart the `tomcat7` service several times until all Solr statistics cores came up OK
- I spent a few hours writing a script (create-thumbnails) to compare the default DSpace thumbnails with the improved parameters above, and when comparing them at size 600px I don't really notice much difference, other than the new ones having slightly crisper text
- So that was a waste of time, though I think our 300px thumbnails are a bit small now
- Another thread on the ImageMagick forum mentions that you need to set the density, then read the image, then set the density again:
```console
$ convert -density 288 10568-97925.pdf\[0\] -density 72 -filter lagrange -flatten 10568-97925-density.jpg
```
- One thing worth mentioning was this syntax for extracting bits from JSON in bash using `jq`:
```console
$ RESPONSE=$(curl -s 'https://dspacetest.cgiar.org/rest/handle/10568/103447?expand=bitstreams')
$ echo $RESPONSE | jq '.bitstreams[] | select(.bundleName=="ORIGINAL") | .retrieveLink'
"/bitstreams/172559/retrieve"
```
## 2020-01-27
- Bizu has been having problems when she logs into CGSpace: she can't see the community list on the front page
- This last happened for another user in [2016-11]({{< ref "2016-11.md" >}}), and it was related to the Tomcat `maxHttpHeaderSize` being too small because the user was in too many groups
- I see that it is similar, with this message appearing in the DSpace log just after she logs in:
```console
2020-01-27 06:02:23,681 ERROR org.dspace.app.xmlui.aspect.discovery.AbstractRecentSubmissionTransformer @ Caught SearchServiceException while retrieving recent submission for: home page
org.dspace.discovery.SearchServiceException: org.apache.solr.search.SyntaxError: Cannot parse 'read:(g0 OR e610 OR g0 OR g3 OR g5 OR g4102 OR g9 OR g4105 OR g10 OR g4107 OR g4108 OR g13 OR g4109 OR g14 OR g15 OR g16 OR g18 OR g20 OR g23 OR g24 OR g2072 OR g2074 OR g28 OR g2076 OR g29 OR g2078 OR g2080 OR g34 OR g2082 OR g2084 OR g38 OR g2086 OR g2088 OR g43 OR g2093 OR g2095 OR g2097 OR g50 OR g51 OR g2101 OR g2103 OR g62 OR g65 OR g77 OR g78 OR g2127 OR g2142 OR g2151 OR g2152 OR g2153 OR g2154 OR g2156 OR g2165 OR g2171 OR g2174 OR g2175 OR g129 OR g2178 OR g2182 OR g2186 OR g153 OR g155 OR g158 OR g166 OR g167 OR g168 OR g169 OR g2225 OR g179 OR g2227 OR g2229 OR g183 OR g2231 OR g184 OR g2233 OR g186 OR g2235 OR g2237 OR g191 OR g192 OR g193 OR g2242 OR g2244 OR g2246 OR g2250 OR g204 OR g205 OR g207 OR g208 OR g2262 OR g2265 OR g218 OR g2268 OR g222 OR g223 OR g2271 OR g2274 OR g2277 OR g230 OR g231 OR g2280 OR g2283 OR g238 OR g2286 OR g241 OR g2289 OR g244 OR g2292 OR g2295 OR g2298 OR g2301 OR g254 OR g255 OR g2305 OR g2308 OR g262 OR g2311 OR g265 OR g268 OR g269 OR g273 OR g276 OR g277 OR g279 OR g282 OR g292 OR g293 OR g296 OR g297 OR g301 OR g303 OR g305 OR g2353 OR g310 OR g311 OR g313 OR g321 OR g325 OR g328 OR g333 OR g334 OR g342 OR g343 OR g345 OR g348 OR g2409 [...] ': too many boolean clauses
```
- Now this appears to be a Solr limit of some kind ("too many boolean clauses")
- I changed the `maxBooleanClauses` for all Solr cores on DSpace Test from 1024 to 2048 and then she was able to see her communities...
- I made a pull request and merged it to the `5_x-prod` branch and will deploy on CGSpace later tonight
- I am curious if anyone on the dspace-tech mailing list has run into this, so I will try to send a message about this there when I get a chance
## 2020-01-28
- Generate a list of CIP subjects for Abenet:
```console
dspace=# \COPY (SELECT DISTINCT text_value as "cg.subject.cip", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 127 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-28-cip-subjects.csv WITH CSV HEADER;
COPY 77
```
- Start looking over the IITA records from earlier this month (IITA_201907_Jan13)
- Delete one duplicate, map one item from ILRI community
- The following items are duplicates or something (there is not enough metadata to tell for sure):
- This item doesn't exist in the journal, and Weed Science volume 55 was published in 2007, not 2003:
- All items using `cg.journal.title` instead of `dc.source`
- Several items were missing ISSN despite having a journal title
- Many items were missing DOIs, abstracts, etc
- I did some metadata enrichment by searching for the items and copying relevant data from journal pages
- I asked Bosede to try to do the same for the rest of the journal articles
## 2020-01-29
- Normalize about 4,500 DOI, YouTube, and SlideShare links on CGSpace that are missing HTTPS or using old format:
```sql
UPDATE metadatavalue SET text_value = regexp_replace(text_value, 'http://www.doi.org', 'https://doi.org') WHERE resource_type_id = 2 AND metadata_field_id = 220 AND text_value LIKE 'http://www.doi.org%';
UPDATE metadatavalue SET text_value = regexp_replace(text_value, 'http://doi.org', 'https://doi.org') WHERE resource_type_id = 2 AND metadata_field_id = 220 AND text_value LIKE 'http://doi.org%';
UPDATE metadatavalue SET text_value = regexp_replace(text_value, 'http://dx.doi.org', 'https://doi.org') WHERE resource_type_id = 2 AND metadata_field_id = 220 AND text_value LIKE 'http://dx.doi.org%';
UPDATE metadatavalue SET text_value = regexp_replace(text_value, 'https://dx.doi.org', 'https://doi.org') WHERE resource_type_id = 2 AND metadata_field_id = 220 AND text_value LIKE 'https://dx.doi.org%';
UPDATE metadatavalue SET text_value = regexp_replace(text_value, 'http://www.youtube.com', 'https://www.youtube.com') WHERE resource_type_id = 2 AND metadata_field_id = 219 AND text_value LIKE 'http://www.youtube.com%';
UPDATE metadatavalue SET text_value = regexp_replace(text_value, 'http://www.slideshare.net', 'https://www.slideshare.net') WHERE resource_type_id = 2 AND metadata_field_id = 219 AND text_value LIKE 'http://www.slideshare.net%';
```
- I exported a list of all of our ISSNs with item IDs so that I could fix them in OpenRefine and submit them with multi-value separators to DSpace metadata import:
```console
dspace=# \COPY (SELECT resource_id as "id", text_value as "dc.identifier.issn" FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 21) to /tmp/2020-01-29-issn.csv WITH CSV HEADER;
COPY 23339
```
- Then, after spending two hours correcting 1,000 ISSNs, I realized that I need to normalize the `text_lang` fields in the database first, or else identical values with different languages ("en_US", NULL, etc) will all show up as changes (for both ISSN and ISBN):
```console
dspace=# UPDATE metadatavalue SET text_lang='en_US' WHERE resource_type_id = 2 AND metadata_field_id IN (20,21);
UPDATE 30454
```
- Then I realized that my initial PostgreSQL query wasn't so genius because if a field already has multiple values it will appear on separate lines with the same ID, so when `dspace metadata-import` sees it, the change will be removed and added, or added and removed, depending on the order it is seen!
- A better course of action is to select the distinct ones and then correct them using `fix-metadata-values.py`...
```console
dspace=# \COPY (SELECT DISTINCT text_value as "dc.identifier.issn[en_US]", count(*) FROM metadatavalue WHERE resource_type_id = 2 AND metadata_field_id = 21 GROUP BY text_value ORDER BY count DESC) to /tmp/2020-01-29-issn-distinct.csv WITH CSV HEADER;
COPY 2900
```
- I re-applied all my corrections, filtering out things like multi-value separators and values that are actually ISBNs so I can fix them later
- Then I applied 181 fixes for ISSNs using `fix-metadata-values.py` on DSpace Test and CGSpace (after testing locally):
```console
$ ./fix-metadata-values.py -i /tmp/2020-01-29-ISSNs-Distinct.csv -db dspace -u dspace -p 'fuuu' -f 'dc.identifier.issn[en_US]' -m 21 -t correct -d
```
## 2020-01-30
- About to start working on the DSpace 6 port and I'm looking at commits that are in the not-yet-tagged DSpace 6.4:
  - [DS-4342] improve the performance of the collections/collection_id/items REST endpoint:
    - c2e6719fa763e291b81b2d61da2f8c758fe38ff3
  - [DS-4136] Improve OAI import performance for a large install:
    - 3f81daf3d89b17ff4d08783ee9899e5a745851dc
    - 37004bbcf4ca3ef2a74ebc6e4774cb605884864e
  - DS-4110: fix issue in legacy id cleanup of stats records:
    - 3752247d6a4b83ee809cc9b197f34a8ff50b9e74
    - e6004e57f0f2f3ce5f433647fe8a467b0176836b
    - 2fb3751c9adfe7311c6df43dbd51a41479480f5e
  - Fix DS-4066 by update all IDs to string type in schema:
    - f15cb33ab4272a3970572e608810de3076d541a3
  - DS-3914: Fix community defiliation:
    - 19cc9719879cf69019acad72ee13915a4128e859
    - b86a7b8d66608ee2bec67fb69b37e27c9a620aa3
  - [DS-3849] Default ID 'order by' clause for other 'get items' queries:
    - 7b888fa558e5792cd780d1d6a7f75564f4da3bf9
    - 8d1aa33f7b9ea5a623e1ed13f139695671c598d4
  - [DS-3664] ImageMagick: Only execute "identify" on first page:
    - 33ba419f3560639bff8ea002cdfc38345c0fea8d
  - DS-3658 Configure ReindexerThread disable reindex:
    - 1d2f10592ac2d86f28044749f34ac05347ea0e0a
    - 05959ef315d2a1670e4b59eee4db21f93ba238fa
    - 7253095b623069d7ef0a1a13cc5a21385d0878c9
  - [DS-3602] 6x Port: Incremental Update of Legacy Id fields in Solr Statistics:
    - 184f2b2153479045fba6239342c63e7f8564b8b6
  - Dspace 6 ds 3545 mirage2: custom sitemap.xmap is ignored:
    - 71c68f2f54dead69329298810d0fecdf76b59c09
- It's annoying that we have to target DSpace 6.3... I think I should totally cherry-pick these when I'm done
- For now I just created a new DSpace repository and checked out the `dspace-6.3` tag and started diffing and copying changes over from our 5.8 repository
- There are some things I need to remember to check:
  - `search.index` settings in DSpace 5's dspace.cfg (dunno where they are now)
  - `thumbnail-fallback-files.xml`
- The code currently lives in the `6_x-dev` branch