<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta property="og:title" content="August, 2017" />
<meta property="og:description" content="2017-08-01
Linode sent an alert that CGSpace (linode18) was using 350% CPU for the past two hours
I looked in the Activity pane of the Admin Control Panel and it seems that Google, Baidu, Yahoo, and Bing are all crawling with massive numbers of bots concurrently (~100 total, mostly Baidu and Google)
The good thing is that, according to dspace.log.2017-08-01, they are all using the same Tomcat session
This means our Tomcat Crawler Session Valve is working
But many of the bots are browsing dynamic URLs like:
/handle/10568/3353/discover
/handle/10568/16510/browse
The robots.txt only blocks the top-level /discover and /browse URLs… we will need to find a way to forbid them from accessing these!
Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
Also, the bot has to successfully browse the page first so it can receive the HTTP header…
We might actually have to block these requests with HTTP 403 depending on the user agent
Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2017-08/" />
<meta property="article:published_time" content="2017-08-01T11:51:52+03:00"/>
<meta property="article:modified_time" content="2017-08-12T09:29:02+03:00"/>
<meta name="twitter:card" content="summary"/><meta name="twitter:title" content="August, 2017"/>
<meta name="twitter:description" content="2017-08-01
Linode sent an alert that CGSpace (linode18) was using 350% CPU for the past two hours
I looked in the Activity pane of the Admin Control Panel and it seems that Google, Baidu, Yahoo, and Bing are all crawling with massive numbers of bots concurrently (~100 total, mostly Baidu and Google)
The good thing is that, according to dspace.log.2017-08-01, they are all using the same Tomcat session
This means our Tomcat Crawler Session Valve is working
But many of the bots are browsing dynamic URLs like:
/handle/10568/3353/discover
/handle/10568/16510/browse
The robots.txt only blocks the top-level /discover and /browse URLs… we will need to find a way to forbid them from accessing these!
Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
It turns out that we’re already adding the X-Robots-Tag "none" HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
Also, the bot has to successfully browse the page first so it can receive the HTTP header…
We might actually have to block these requests with HTTP 403 depending on the user agent
Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415
This was due to newline characters in the dc.description.abstract column, which caused OpenRefine to choke when exporting the CSV
I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using g/^$/d
Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
"/>
<meta name="generator" content="Hugo 0.26" />
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "BlogPosting",
"headline": "August, 2017",
"url": "https://alanorth.github.io/cgspace-notes/2017-08/",
"wordCount": "1327",
"datePublished": "2017-08-01T11:51:52+03:00",
"dateModified": "2017-08-12T09:29:02+03:00",
"author": {
"@type": "Person",
"name": "Alan Orth"
},
"keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2017-08/">
<title>August, 2017 | CGSpace Notes</title>
<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.css" rel="stylesheet" integrity="sha384-j3n8sYdzztDYtVc80KiiuOXoCg5Bjz0zYyLGzDMW8RbfA0u5djbF0GO3bVOPoLyN" crossorigin="anonymous">
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>
<header class="blog-header">
<div class="container">
<h1 class="blog-title"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>
<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title"><a href="https://alanorth.github.io/cgspace-notes/2017-08/">August, 2017</a></h2>
<p class="blog-post-meta"><time datetime="2017-08-01T11:51:52+03:00">Tue Aug 01, 2017</time> by Alan Orth in
<i class="fa fa-tag" aria-hidden="true"></i> <a href="/cgspace-notes/tags/notes" rel="tag">Notes</a>
</p>
</header>
<h2 id="2017-08-01">2017-08-01</h2>
<ul>
<li>Linode sent an alert that CGSpace (linode18) was using 350% CPU for the past two hours</li>
<li>I looked in the Activity pane of the Admin Control Panel and it seems that Google, Baidu, Yahoo, and Bing are all crawling with massive numbers of bots concurrently (~100 total, mostly Baidu and Google)</li>
<li>The good thing is that, according to <code>dspace.log.2017-08-01</code>, they are all using the same Tomcat session</li>
<li>This means our Tomcat Crawler Session Valve is working</li>
<li>But many of the bots are browsing dynamic URLs like:
<ul>
<li>/handle/10568/3353/discover</li>
<li>/handle/10568/16510/browse</li>
</ul></li>
<li>The <code>robots.txt</code> only blocks the top-level <code>/discover</code> and <code>/browse</code> URLs… we will need to find a way to forbid them from accessing these!</li>
<li>Relevant issue from DSpace Jira (semi resolved in DSpace 6.0): <a href="https://jira.duraspace.org/browse/DS-2962">https://jira.duraspace.org/browse/DS-2962</a></li>
<li>It turns out that we’re already adding the <code>X-Robots-Tag "none"</code> HTTP header, but this only forbids the search engine from <em>indexing</em> the page, not crawling it!</li>
<li>Also, the bot has to successfully browse the page first so it can receive the HTTP header…</li>
<li>We might actually have to <em>block</em> these requests with HTTP 403 depending on the user agent (a rough idea of how that could look in nginx is sketched after this list)</li>
<li>Abenet pointed out that the CGIAR Library Historical Archive collection I sent July 20th only had ~100 entries, instead of 2415</li>
<li>This was due to newline characters in the <code>dc.description.abstract</code> column, which caused OpenRefine to choke when exporting the CSV</li>
<li>I exported a new CSV from the collection on DSpace Test and then manually removed the characters in vim using <code>g/^$/d</code></li>
<li>Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet</li>
</ul>
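<p>A rough sketch of how we might do that at the nginx layer, matching on the user agent (the agent list, the <code>$is_bot</code> variable, and the paths below are illustrative, not our live config):</p>

<pre><code># hypothetical sketch: return 403 to crawler user agents on dynamic
# Discovery/Browse URLs instead of letting them reach Tomcat
# (the map block lives in the http context)
map $http_user_agent $is_bot {
    default                          0;
    ~*(bot|crawl|spider|baiduspider) 1;
}

# (inside the server block)
location ~ ^/handle/[0-9]+/[0-9]+/(discover|browse) {
    if ($is_bot) {
        return 403;
    }
    proxy_pass http://tomcat_http;
}
</code></pre>

<p>Unlike the <code>X-Robots-Tag</code> header approach, the request would never reach DSpace at all.</p>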
<p></p>
<h2 id="2017-08-02">2017-08-02</h2>
<ul>
<li>Magdalena from CCAFS asked if there was a way to get the top ten items published in 2016 (note: not the top items in 2016!)</li>
<li>I think Atmire’s Content and Usage Analysis module should be able to do this but I will have to look at the configuration and maybe email Atmire if I can’t figure it out</li>
<li>I had a look at the module configuration and couldn’t figure out a way to do this, so I <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-tickets">opened a ticket on the Atmire tracker</a> (finding the 2016 items themselves is straightforward in SQL, as sketched after this list; it’s ranking them by usage that needs the module)</li>
<li>Atmire responded about the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=500">missing workflow statistics issue</a> a few weeks ago but I didn’t see it for some reason</li>
<li>They said they added a publication and saw the workflow stat for the user, so I should try again and let them know</li>
</ul>
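<p>For reference, getting the set of items issued in 2016 is simple enough in PostgreSQL (a sketch along the lines of my other metadata queries; ranking them by views would still have to come from the Solr statistics or the CUA module):</p>

<pre><code>dspace=# SELECT count(*) FROM metadatavalue WHERE metadata_field_id = (SELECT metadata_field_id FROM metadatafieldregistry WHERE element = 'date' AND qualifier = 'issued') AND resource_type_id = 2 AND text_value LIKE '2016%';
</code></pre>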
<h2 id="2017-08-05">2017-08-05</h2>
<ul>
<li>Usman from CIFOR emailed to ask about the status of our OAI tests for harvesting their DSpace repository</li>
<li>I told him that the OAI appears to not be harvesting properly after the first sync, and that the control panel shows an “Internal error” for that collection:</li>
</ul>
<p><img src="/cgspace-notes/2017/08/cifor-oai-harvesting.png" alt="CIFOR OAI harvesting" /></p>
<ul>
<li>I don’t see anything related in our logs, so I asked him to check for our server’s IP in their logs</li>
<li>Also, in the meantime I stopped the harvesting process, reset the status, and restarted the process via the Admin control panel (note: I didn’t reset the collection, just the harvester status!); a quick manual check of the remote OAI endpoint is sketched after this list</li>
</ul>
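<p>While waiting to hear back, one easy sanity check is to hit the remote OAI endpoint by hand and make sure it responds at all (the base URL below is a placeholder, not CIFOR’s actual endpoint):</p>

<pre><code>$ curl 'https://repository.example.org/oai/request?verb=Identify'
$ curl 'https://repository.example.org/oai/request?verb=ListRecords&amp;metadataPrefix=oai_dc'
</code></pre>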
<h2 id="2017-08-07">2017-08-07</h2>
<ul>
<li>Apply Abenet’s corrections for the CGIAR Library’s Consortium subcommunity (697 records)</li>
<li>I had to fix a few small things, like moving the <code>dc.title</code> column away from the beginning of the row, deleting blank lines in the abstract in vim using <code>:g/^$/d</code>, and adding the <code>dc.subject[en_US]</code> column back, as she had deleted it and DSpace didn’t detect the changes made there (we needed to blank the values instead)</li>
</ul>
<h2 id="2017-08-08">2017-08-08</h2>
<ul>
<li>Apply Abenet’s corrections for the CGIAR Library’s historic archive subcommunity (2415 records)</li>
<li>I had to add the <code>dc.subject[en_US]</code> column back with blank values so that DSpace could detect the changes</li>
<li>I applied the changes in batches of 500 items (one way to do this from the command line is sketched after this list)</li>
</ul>
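<p>One way to apply a CSV like this in batches from the command line is DSpace’s <code>metadata-import</code> tool, which lists the pending changes and asks for confirmation before applying them (the file name and eperson address here are placeholders; the same CSV can also be applied from the admin UI):</p>

<pre><code>$ dspace metadata-import -f /tmp/historic-archive-batch-1.csv -e aorth@example.com
</code></pre>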
<h2 id="2017-08-09">2017-08-09</h2>
<ul>
<li>Run system updates on DSpace Test and reboot server</li>
<li>Help ICARDA upgrade their MELSpace to DSpace 5.7 using the <a href="https://github.com/alanorth/docker-dspace">docker-dspace</a> container
<ul>
<li>We had to import the PostgreSQL dump into the PostgreSQL container using: <code>pg_restore -U postgres -d dspace blah.dump</code> (a fuller sketch of doing this through Docker follows this list)</li>
<li>Otherwise, when using <code>-O</code> it messes up the permissions on the schema and DSpace can’t read it</li>
</ul></li>
</ul>
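<p>A fuller sketch of the restore through Docker, assuming the PostgreSQL container is named <code>dspace_db</code> (the container name and paths are illustrative):</p>

<pre><code># copy the dump into the container, then restore it as the postgres superuser
$ docker cp blah.dump dspace_db:/tmp/blah.dump
$ docker exec -it dspace_db pg_restore -U postgres -d dspace /tmp/blah.dump
</code></pre>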
<h2 id="2017-08-10">2017-08-10</h2>
<ul>
<li>Apply last updates to the CGIAR Library’s Fund community (812 items)</li>
<li>Had to do some quality checks and column renames before importing, as either Sisay or Abenet renamed a few columns and the metadata importer wanted to remove/add new metadata for title, abstract, etc.</li>
<li>Also, I applied the HTML entities unescape transform on the abstract column in OpenRefine</li>
<li>I need to get an author list from the database for only the CGIAR Library community to send to Peter</li>
<li>It turns out that I had already used this SQL query in <a href="/cgspace-notes/2017-05">May, 2017</a> to get the authors from CGIAR Library:</li>
</ul>
<pre><code>dspace#= \copy (select distinct text_value, count(*) from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 AND resource_id IN (select item_id from collection2item where collection_id IN (select resource_id from handle where handle in ('10568/93761', '10947/1', '10947/10', '10947/11', '10947/12', '10947/13', '10947/14', '10947/15', '10947/16', '10947/17', '10947/18', '10947/19', '10947/2', '10947/20', '10947/21', '10947/22', '10947/23', '10947/24', '10947/25', '10947/2512', '10947/2515', '10947/2516', '10947/2517', '10947/2518', '10947/2519', '10947/2520', '10947/2521', '10947/2522', '10947/2523', '10947/2524', '10947/2525', '10947/2526', '10947/2527', '10947/2528', '10947/2529', '10947/2530', '10947/2531', '10947/2532', '10947/2533', '10947/2534', '10947/2535', '10947/2536', '10947/2537', '10947/2538', '10947/2539', '10947/2540', '10947/2541', '10947/2589', '10947/26', '10947/2631', '10947/27', '10947/2708', '10947/2776', '10947/2782', '10947/2784', '10947/2786', '10947/2790', '10947/28', '10947/2805', '10947/2836', '10947/2871', '10947/2878', '10947/29', '10947/2900', '10947/2919', '10947/3', '10947/30', '10947/31', '10947/32', '10947/33', '10947/34', '10947/3457', '10947/35', '10947/36', '10947/37', '10947/38', '10947/39', '10947/4', '10947/40', '10947/4052', '10947/4054', '10947/4056', '10947/4068', '10947/41', '10947/42', '10947/43', '10947/4368', '10947/44', '10947/4467', '10947/45', '10947/4508', '10947/4509', '10947/4510', '10947/4573', '10947/46', '10947/4635', '10947/4636', '10947/4637', '10947/4638', '10947/4639', '10947/4651', '10947/4657', '10947/47', '10947/48', '10947/49', '10947/5', '10947/50', '10947/51', '10947/5308', '10947/5322', '10947/5324', '10947/5326', '10947/6', '10947/7', '10947/8', '10947/9'))) group by text_value order by count desc) to /tmp/cgiar-library-authors.csv with csv;
</code></pre>
<ul>
<li>Meeting with Peter and CGSpace team
<ul>
<li>Alan to follow up with ICARDA about depositing in CGSpace; we want ICARDA and Drylands legacy content but not duplicates</li>
<li>Alan to follow up on dc.rights, where are we?</li>
<li>Alan to follow up with Atmire about a dedicated field for ORCIDs, based on the discussion in the <a href="https://wiki.duraspace.org/display/cmtygp/DCAT+Meeting+June+2017">June, 2017 DCAT meeting</a></li>
<li>Alan to ask about how to query external services like AGROVOC in the DSpace submission form</li>
</ul></li>
<li>Follow up with Atmire on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=510">ticket about ORCID metadata in DSpace</a></li>
<li>Follow up with Lili and Andrea about the pending CCAFS metadata and flagship updates</li>
</ul>
<h2 id="2017-08-11">2017-08-11</h2>
<ul>
<li>CGSpace had load issues and was throwing errors related to PostgreSQL</li>
<li>I told Tsega to reduce the max connections from 70 to 40, because each web application actually gets its own pool of that size, so with xmlui, oai, jspui, rest, etc. it could be 70 x 4 = 280 connections depending on the load, while the PostgreSQL config itself only allows 100!</li>

<li>I learned this from a recent discussion on the DSpace wiki</li>
<li>I need to either look into setting up a database pool through JNDI or increase the PostgreSQL max connections</li>
<li>Also, I need to find out where the load is coming from (rest?) and possibly block bots from accessing dynamic pages like Browse and Discover instead of just sending an X-Robots-Tag HTTP header</li>
<li>I noticed that Google has bitstreams from the <code>rest</code> interface in its search index. I need to ask on the dspace-tech mailing list to see what other people are doing about this, and maybe start issuing an <code>X-Robots-Tag: none</code> header there! (A possible nginx snippet is sketched after this list.)</li>
</ul>
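<p>If we do end up setting the header on the REST API, something like this in the nginx config should do it (an untested sketch):</p>

<pre><code># sketch: ask search engines not to index anything served via /rest
location /rest {
    add_header X-Robots-Tag "none";
    proxy_pass http://tomcat_http;
}
</code></pre>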
<h2 id="2017-08-12">2017-08-12</h2>
<ul>
<li>I sent a message to the mailing list about the duplicate content issue with <code>/rest</code> and <code>/bitstream</code> URLs</li>
<li>Looking at the logs for the REST API on <code>/rest</code>, it looks like someone is hammering it, doing testing or something…</li>
</ul>
<pre><code># awk '{print $1}' /var/log/nginx/rest.log.1 | sort -n | uniq -c | sort -h | tail -n 5
140 66.249.66.91
404 66.249.66.90
1479 50.116.102.77
9794 45.5.184.196
85736 70.32.83.92
</code></pre>
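<p>A quick way to see who an address belongs to is to resolve the suspect hostname and do a reverse lookup on the IP, for example:</p>

<pre><code>$ host ccafs.cgiar.org
$ host 70.32.83.92
</code></pre>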
<ul>
<li>The top offender is 70.32.83.92, which is actually the same IP as ccafs.cgiar.org, so I will email the Macaroni Bros to see if they can test on DSpace Test instead</li>
<li>I’ve enabled logging of <code>/oai</code> requests on nginx as well so we can potentially determine bad actors here (also to see if anyone is actually using OAI!)</li>
</ul>
<pre><code> # log oai requests
location /oai {
access_log /var/log/nginx/oai.log;
proxy_pass http://tomcat_http;
}
</code></pre>
</article>
</div> <!-- /.blog-main -->
<aside class="col-sm-3 offset-sm-1 blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2017-08/">August, 2017</a></li>
<li><a href="/cgspace-notes/2017-07/">July, 2017</a></li>
<li><a href="/cgspace-notes/2017-06/">June, 2017</a></li>
<li><a href="/cgspace-notes/2017-05/">May, 2017</a></li>
<li><a href="/cgspace-notes/2017-04/">April, 2017</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div> <!-- /.row -->
</div> <!-- /.container -->
<footer class="blog-footer">
<p>
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href='https://twitter.com/mralanorth'>@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>