diff --git a/content/post/2017-08.md b/content/post/2017-08.md
index ecea52895..d030c9442 100644
--- a/content/post/2017-08.md
+++ b/content/post/2017-08.md
@@ -18,5 +18,11 @@ tags = ["Notes"]
 - Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 - It turns out that we're already adding the `X-Robots-Tag "none"` HTTP header, but this only forbids the search engine from _indexing_ the page, not crawling it!
 - Also, the bot has to successfully browse the page first so it can receive the HTTP header...
+- We might actually have to _block_ these requests with HTTP 403 depending on the user agent
+- Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+- This was due to newline characters in the `dc.description.abstract` column, which caused OpenRefine to choke when exporting the CSV
+- I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using `g/^$/d`
+- Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
+
diff --git a/public/2017-08/index.html b/public/2017-08/index.html
index 56be6ff1f..9d9afd795 100644
--- a/public/2017-08/index.html
+++ b/public/2017-08/index.html
@@ -23,6 +23,11 @@ The robots.txt only blocks the top-level /discover and /browse URLs… we w
 Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
 Also, the bot has to successfully browse the page first so it can receive the HTTP header…
+We might actually have to block these requests with HTTP 403 depending on the user agent
+Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
+I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
+Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
 " />
@@ -32,7 +37,7 @@ Also, the bot has to successfully browse the page first so it can receive the HT
-
+
@@ -69,6 +74,11 @@ The robots.txt only blocks the top-level /discover and /browse URLs… we w
 Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
 Also, the bot has to successfully browse the page first so it can receive the HTTP header…
+We might actually have to block these requests with HTTP 403 depending on the user agent
+Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
+I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
+Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
 "/>
@@ -83,9 +93,9 @@ Also, the bot has to successfully browse the page first so it can receive the HT
     "@type": "BlogPosting",
     "headline": "August, 2017",
     "url": "https://alanorth.github.io/cgspace-notes/2017-08/",
-    "wordCount": "166",
+    "wordCount": "262",
     "datePublished": "2017-08-01T11:51:52+03:00",
-    "dateModified": "2017-08-01T11:57:37+03:00",
+    "dateModified": "2017-08-01T12:03:37+03:00",
     "author": {
       "@type": "Person",
       "name": "Alan Orth"
@@ -165,6 +175,11 @@ Also, the bot has to successfully browse the page first so it can receive the HT
 Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
 Also, the bot has to successfully browse the page first so it can receive the HTTP header…
+We might actually have to block these requests with HTTP 403 depending on the user agent
+Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
+I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
+Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
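For the HTTP 403 idea, something like this nginx snippet would do it. This is a sketch only, assuming nginx is the frontend web server proxying to the application; the bot names, port, and variable name are illustrative, not our actual config:

    # goes in the http block: map the request's User-Agent to a flag (1 = block)
    map $http_user_agent $bad_bot {
        default          0;
        ~*baiduspider    1;
        ~*yandexbot      1;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # refuse flagged bots before the request ever reaches DSpace
            if ($bad_bot) {
                return 403;
            }
            proxy_pass http://localhost:8080;
        }
    }

This rejects the request outright instead of relying on the bot to fetch the page and then honor the X-Robots-Tag header.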
diff --git a/public/index.html b/public/index.html
index e471e24ef..863f4a969 100644
--- a/public/index.html
+++ b/public/index.html
@@ -121,6 +121,11 @@
 Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
 Also, the bot has to successfully browse the page first so it can receive the HTTP header…
+We might actually have to block these requests with HTTP 403 depending on the user agent
+Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
+I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
+Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
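As a scriptable alternative to the manual vim pass, the stray newlines could be stripped before the CSV ever reaches OpenRefine, for example with Python's standard csv module, which parses quoted multi-line fields correctly. A sketch only; the file names are hypothetical:

    import csv

    # read the DSpace export and rewrite it with embedded newlines
    # (e.g. in dc.description.abstract) collapsed to spaces
    with open('collection-export.csv', newline='') as src, \
         open('collection-clean.csv', 'w', newline='') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow([cell.replace('\r', ' ').replace('\n', ' ') for cell in row])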
diff --git a/public/index.xml b/public/index.xml
index d62071a91..4cb1fd5b9 100644
--- a/public/index.xml
+++ b/public/index.xml
@@ -34,6 +34,11 @@
 <li>Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): <a href="https://jira.duraspace.org/browse/DS-2962">https://jira.duraspace.org/browse/DS-2962</a></li>
 <li>It turns out that we&rsquo;re already adding the <code>X-Robots-Tag &quot;none&quot;</code> HTTP header, but this only forbids the search engine from <em>indexing</em> the page, not crawling it!</li>
 <li>Also, the bot has to successfully browse the page first so it can receive the HTTP header&hellip;</li>
+<li>We might actually have to <em>block</em> these requests with HTTP 403 depending on the user agent</li>
+<li>Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415</li>
+<li>This was due to newline characters in the <code>dc.description.abstract</code> column, which caused OpenRefine to choke when exporting the CSV</li>
+<li>I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using <code>g/^$/d</code></li>
+<li>Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet</li>
 </ul>
 <p></p>
diff --git a/public/post/index.html b/public/post/index.html
index 40292476c..4900c2f33 100644
--- a/public/post/index.html
+++ b/public/post/index.html
@@ -121,6 +121,11 @@
 Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
 Also, the bot has to successfully browse the page first so it can receive the HTTP header…
+We might actually have to block these requests with HTTP 403 depending on the user agent
+Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
+I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
+Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
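On the OpenRefine side, the HTML character cleanup can be done with a GREL transform (Edit cells → Transform…) on the affected column, for example:

    value.unescape('html')

which decodes entities like &amp; back into plain characters. This is a sketch of the kind of transform involved; the exact expressions used on this file may have differed, and the author authority cleanup is a separate step in the same project.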
diff --git a/public/post/index.xml b/public/post/index.xml
index e356464be..9565ed3d5 100644
--- a/public/post/index.xml
+++ b/public/post/index.xml
@@ -34,6 +34,11 @@
 <li>Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): <a href="https://jira.duraspace.org/browse/DS-2962">https://jira.duraspace.org/browse/DS-2962</a></li>
 <li>It turns out that we&rsquo;re already adding the <code>X-Robots-Tag &quot;none&quot;</code> HTTP header, but this only forbids the search engine from <em>indexing</em> the page, not crawling it!</li>
 <li>Also, the bot has to successfully browse the page first so it can receive the HTTP header&hellip;</li>
+<li>We might actually have to <em>block</em> these requests with HTTP 403 depending on the user agent</li>
+<li>Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415</li>
+<li>This was due to newline characters in the <code>dc.description.abstract</code> column, which caused OpenRefine to choke when exporting the CSV</li>
+<li>I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using <code>g/^$/d</code></li>
+<li>Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet</li>
 </ul>
 <p></p>
diff --git a/public/sitemap.xml b/public/sitemap.xml
index 9e317c57e..a4cadeb49 100644
--- a/public/sitemap.xml
+++ b/public/sitemap.xml
@@ -4,7 +4,7 @@
 https://alanorth.github.io/cgspace-notes/2017-08/
-2017-08-01T11:57:37+03:00
+2017-08-01T12:03:37+03:00
@@ -114,7 +114,7 @@
 https://alanorth.github.io/cgspace-notes/
-2017-08-01T11:57:37+03:00
+2017-08-01T12:03:37+03:00
 0
@@ -125,19 +125,19 @@
 https://alanorth.github.io/cgspace-notes/tags/notes/
-2017-08-01T11:57:37+03:00
+2017-08-01T12:03:37+03:00
 0
 https://alanorth.github.io/cgspace-notes/post/
-2017-08-01T11:57:37+03:00
+2017-08-01T12:03:37+03:00
 0
 https://alanorth.github.io/cgspace-notes/tags/
-2017-08-01T11:57:37+03:00
+2017-08-01T12:03:37+03:00
 0
diff --git a/public/tags/notes/index.html b/public/tags/notes/index.html
index f6fa181a4..7f74aa68a 100644
--- a/public/tags/notes/index.html
+++ b/public/tags/notes/index.html
@@ -121,6 +121,11 @@
 Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
 It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
 Also, the bot has to successfully browse the page first so it can receive the HTTP header…
+We might actually have to block these requests with HTTP 403 depending on the user agent
+Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
+This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
+I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
+Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
diff --git a/public/tags/notes/index.xml b/public/tags/notes/index.xml
index 3551bbd75..ad82796fb 100644
--- a/public/tags/notes/index.xml
+++ b/public/tags/notes/index.xml
@@ -34,6 +34,11 @@
 <li>Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): <a href="https://jira.duraspace.org/browse/DS-2962">https://jira.duraspace.org/browse/DS-2962</a></li>
 <li>It turns out that we&rsquo;re already adding the <code>X-Robots-Tag &quot;none&quot;</code> HTTP header, but this only forbids the search engine from <em>indexing</em> the page, not crawling it!</li>
 <li>Also, the bot has to successfully browse the page first so it can receive the HTTP header&hellip;</li>
+<li>We might actually have to <em>block</em> these requests with HTTP 403 depending on the user agent</li>
+<li>Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415</li>
+<li>This was due to newline characters in the <code>dc.description.abstract</code> column, which caused OpenRefine to choke when exporting the CSV</li>
+<li>I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using <code>g/^$/d</code></li>
+<li>Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet</li>
 </ul>
 <p></p>
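For reference, the vim command used above, g/^$/d, is the global command: for every line matching the pattern ^$ (an empty line), run the d (delete) ex command:

    :g/^$/d       " delete every empty line in the buffer
    :g/^\s*$/d    " variant that also catches whitespace-only lines

The same cleanup works non-interactively from the shell, e.g. sed -i '/^$/d' collection.csv (file name illustrative), though that only helps after the embedded newlines have already split the cells onto their own lines.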