<li>Spend some time looking at duplicate DOIs again…</li>
</ul>
<h2 id="2024-05-07">2024-05-07</h2>
<ul>
<li>Discuss RSS feeds and OpenSearch with IWMI
<ul>
<li>It seems our OpenSearch feed settings are using the defaults, so I need to copy some of those over from our old DSpace 6 branch</li>
</ul>
</li>
<li>I saw a patch for an interesting issue on DSpace GitHub: <a href="https://github.com/DSpace/DSpace/issues/9544">Error submitting or deleting items - URI too long when user is in a large number of groups</a>
<ul>
<li>I hadn’t realized it, but we have lots of those errors:</li>
</ul>
</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ zstdgrep -a <span style="color:#e6db74">'URI Too Long'</span> log/dspace.log-2024-04-* | wc -l
</span></span></code></pre></div><ul>
<li>Spend some time looking at duplicate DOIs again…</li>
</ul>
<h2 id="2024-05-08">2024-05-08</h2>
<ul>
<li>Spend some time looking at duplicate DOIs again…
<ul>
<li>I finally finished looking at the duplicate DOIs for journal articles</li>
<li>I updated the list of handle redirects and there are 386 of them!</li>
</ul>
</li>
</ul>
<h2 id="2024-05-09">2024-05-09</h2>
<ul>
<li>Spend some time working on the IFPRI 2020–2021 batch
<ul>
<li>I started by checking for exact duplicates (1.0 similarity) using DOI, type, and issue date</li>
</ul>
</li>
</ul>
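<p>The exact-match pass can be sketched roughly like this (a sketch only; the field names and the actual logic in my script are assumptions):</p>

```python
# Sketch: flag exact duplicates (1.0 similarity) by comparing
# normalized DOI, type, and issue date. Field names are hypothetical.

def normalize_doi(doi: str) -> str:
    """Lowercase and strip common DOI URL prefixes."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def exact_duplicates(batch: list[dict], repository: list[dict]) -> list[tuple[dict, dict]]:
    """Return (batch item, repository item) pairs whose DOI, type,
    and issue date all match exactly."""
    index: dict[tuple, list[dict]] = {}
    for item in repository:
        key = (normalize_doi(item["doi"]), item["type"], item["issued"])
        index.setdefault(key, []).append(item)
    matches = []
    for item in batch:
        key = (normalize_doi(item["doi"]), item["type"], item["issued"])
        for existing in index.get(key, []):
            matches.append((item, existing))
    return matches
```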
<h2 id="2024-05-12">2024-05-12</h2>
<ul>
<li>I couldn’t figure out how to do a complex join on withdrawn items along with their metadata, so I pulled out a few fields, like titles, handles, and provenance, separately:</li>
</ul>
<pre tabindex="0"><code class="language-psql" data-lang="psql">dspace=# \COPY (SELECT i.uuid, m.text_value AS uri FROM item i JOIN metadatavalue m ON i.uuid = m.dspace_object_id WHERE withdrawn AND m.metadata_field_id=25) TO /tmp/withdrawn-handles.csv CSV HEADER;
dspace=# \COPY (SELECT i.uuid, m.text_value AS title FROM item i JOIN metadatavalue m ON i.uuid = m.dspace_object_id WHERE withdrawn AND m.metadata_field_id=64) TO /tmp/withdrawn-titles.csv CSV HEADER;
dspace=# \COPY (SELECT i.uuid, m.text_value AS submitted_by FROM item i JOIN metadatavalue m ON i.uuid = m.dspace_object_id WHERE withdrawn AND m.metadata_field_id=28 AND m.text_value LIKE 'Submitted by%') TO /tmp/withdrawn-submitted-by.csv CSV HEADER;
</code></pre>
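<p>Merging those per-field exports back together on <code>uuid</code> is then a simple dictionary join, for example (a sketch using only the standard library; column names match the <code>\COPY</code> headers above):</p>

```python
# Sketch: join the separate withdrawn-* CSV exports on uuid.
import csv

def read_map(path: str, column: str) -> dict:
    """Map uuid -> value for one of the exported CSVs."""
    with open(path, newline="") as f:
        return {row["uuid"]: row[column] for row in csv.DictReader(f)}

def merge_withdrawn(handles_csv: str, titles_csv: str, out_csv: str) -> None:
    """Write one CSV with uuid, uri, and title per withdrawn item."""
    handles = read_map(handles_csv, "uri")
    titles = read_map(titles_csv, "title")
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["uuid", "uri", "title"])
        for uuid, uri in handles.items():
            writer.writerow([uuid, uri, titles.get(uuid, "")])
```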
<ul>
<li>I discovered the <code>/server/api/pid/find</code> endpoint today, which is much more direct and manageable than the <code>/server/api/discover/search/objects?query=</code> endpoint when trying to get metadata for a Handle (item, collection, or community)
<ul>
<li>The “pid” apparently stands for persistent identifier, and we can use it like this:</li>
<li><a href="https://dspace7test.ilri.org/server/api/discover/search/objects?query=dcterms.issued_dt%3A%5B2024-01-01T00%3A00%3A00Z%20TO%20%2A%5D">https://dspace7test.ilri.org/server/api/discover/search/objects?query=dcterms.issued_dt%3A%5B2024-01-01T00%3A00%3A00Z%20TO%20%2A%5D</a> — note the Lucene search syntax is the URL-encoded version of <code>:[2024-01-01T00:00:00Z TO *]</code></li>
</ul>
</li>
</ul>
<p>Both of them return the same number of results and seem identical as far as I can see, but the second one uses Solr date indexes and requires the full Lucene datetime and range syntax</p>
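<p>The URL encoding for these Lucene range queries is easy to produce with Python’s standard library, for example:</p>

```python
# Build the URL-encoded Solr/Lucene date-range query used above.
from urllib.parse import quote

query = "dcterms.issued_dt:[2024-01-01T00:00:00Z TO *]"
encoded = quote(query, safe="")  # encode ':', '[', ']', '*', and spaces too
url = (
    "https://dspace7test.ilri.org/server/api/discover/search/objects?query="
    + encoded
)
print(url)
```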
<p>I wrote a new version of the <code>check_duplicates.py</code> script to help identify duplicates with different types</p>
<ul>
<li>Initially I called it <code>check_duplicates_fast.py</code> but it’s actually not faster</li>
<li>I need to find a way to deal with duplicates from IFPRI’s repository because there are some mismatched types…</li>
</ul>
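<p>The core of this kind of fuzzy matching can be sketched with the standard library’s <code>difflib</code> (a sketch, not the actual <code>check_duplicates.py</code> implementation; the 0.7 threshold and 270-day window are illustrative values):</p>

```python
# Sketch: fuzzy duplicate detection on titles, constrained by issue date.
from datetime import date
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.7  # ratios below this are not considered duplicates
MAX_DATE_DIFF_DAYS = 270    # ignore pairs issued more than ~9 months apart

def title_similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two titles, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def possible_duplicate(title_a: str, issued_a: date,
                       title_b: str, issued_b: date) -> bool:
    """True if the titles are similar enough and the issue dates are close."""
    if abs((issued_a - issued_b).days) > MAX_DATE_DIFF_DAYS:
        return False
    return title_similarity(title_a, title_b) >= SIMILARITY_THRESHOLD
```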
<h2 id="2024-05-20">2024-05-20</h2>
<p>Continue working through alternative duplicate matching for IFPRI</p>
<ul>
<li>Their item types are sometimes different than ours…</li>
<li>One thing I can say for sure is that the default similarity factor in my script is 0.6, and I rarely see legitimate duplicates at that similarity, so I might increase it to 0.7 to reduce the number of items I have to check</li>
<li>Also, the allowed difference in issue dates is currently 365 days, but I should reduce that a bit, perhaps to 270 days (9 months)</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span>url <span style="color:#f92672">=</span> <span style="color:#e6db74">"https://api.crossref.org/works/"</span> <span style="color:#f92672">+</span> doi
</span></span></code></pre></div>
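<p>For reference, fetching an item’s Crossref record works roughly like this (a sketch using the public Crossref REST API; the <code>extract_abstract</code> helper is hypothetical):</p>

```python
# Sketch: look up a work on the Crossref REST API by DOI and pull out
# the optional JATS-flavored abstract.
import json
from typing import Optional
from urllib.request import urlopen

def extract_abstract(work: dict) -> Optional[str]:
    """Crossref wraps records in a "message" envelope; "abstract" is
    optional and contains JATS XML markup when present."""
    return work.get("message", {}).get("abstract")

def crossref_work(doi: str) -> dict:
    """Fetch a work record from the public Crossref REST API."""
    url = "https://api.crossref.org/works/" + doi
    with urlopen(url) as response:
        return json.load(response)
```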
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>forEach(value.parseHtml().select("abstract p"), i, i.htmlText()).join("\r\n\r\n")
</span></span></code></pre></div><p>For each paragraph inside an abstract, get the inner text and join them as one string separated by two newlines…</p>
<ul>
<li>Ah, some articles have multiple abstracts, for example: <a href="https://journals.plos.org/plosone/article/file?id=https://doi.org/10.1371/journal.pntd.0001859&amp;type=manuscript">https://journals.plos.org/plosone/article/file?id=https://doi.org/10.1371/journal.pntd.0001859&amp;type=manuscript</a></li>
<li>I need to select the abstract that does <strong>not</strong> have any attributes (using <a href="https://jsoup.org/apidocs/org/jsoup/select/Selector.html">Jsoup selector syntax</a>)</li>
</ul>
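<p>The same “element without attributes” selection is easy to verify with Python’s stdlib XML parser (a sketch that mirrors the Jsoup <code>abstract:not([*])</code> selector, not part of the OpenRefine workflow itself):</p>

```python
# Sketch: pick only <abstract> elements that have no attributes,
# then collect the text of their <p> children.
import xml.etree.ElementTree as ET

def plain_abstract_paragraphs(xml_text: str) -> list[str]:
    root = ET.fromstring(xml_text)
    paragraphs = []
    for abstract in root.iter("abstract"):
        if abstract.attrib:  # skip e.g. <abstract abstract-type="summary">
            continue
        for p in abstract.iter("p"):
            paragraphs.append("".join(p.itertext()))
    return paragraphs
```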
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>forEach(value.parseXml().select("abstract:not([*]) p"), i, i.xmlText()).join("\r\n\r\n")
</span></span></code></pre></div><p>Testing <code>xsv</code> (Rust) versus <code>csvkit</code> (Python) to filter all items with DOIs from a DSpace dump with 118,000 items:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-console" data-lang="console"><span style="display:flex;"><span>$ time xsv search -s doi <span style="color:#e6db74">'doi\.org'</span> /tmp/cgspace-minimal.csv | xsv <span style="color:#66d9ef">select</span> doi | xsv count
</span></span><span style="display:flex;"><span>xsv search -s doi 'doi\.org' /tmp/cgspace-minimal.csv 0.06s user 0.03s system 98% cpu 0.091 total
</span></span><span style="display:flex;"><span>xsv select doi 0.02s user 0.02s system 40% cpu 0.091 total
</span></span><span style="display:flex;"><span>xsv count 0.01s user 0.00s system 9% cpu 0.090 total
</span></span><span style="display:flex;"><span>$ time csvgrep -c doi -m <span style="color:#e6db74">'doi.org'</span> /tmp/cgspace-minimal.csv | csvcut -c doi | csvstat --count
</span></span><span style="display:flex;"><span>csvgrep -c doi -m 'doi.org' /tmp/cgspace-minimal.csv 1.15s user 0.06s system 95% cpu 1.273 total
</span></span><span style="display:flex;"><span>csvcut -c doi 0.42s user 0.05s system 36% cpu 1.283 total
</span></span><span style="display:flex;"><span>csvstat --count 0.20s user 0.03s system 18% cpu 1.298 total
</span></span></code></pre></div><ul>
<li>I’m thinking of increasing the frequency of thumbnail generation on CGSpace
<ul>
<li>Currently the <code>dspace filter-media</code> script runs once at 3AM for all media types and seems to take ~10 minutes to run for all 118,000 items…</li>
<li>I think I will make the thumbnailer run explicitly more often using <code>-p "ImageMagick PDF Thumbnail"</code></li>