2017-08-01
- Linode sent an alert that CGSpace (linode18) was using 350% CPU for the past two hours
- I looked in the Activity pane of the Admin Control Panel and it seems that Google, Baidu, Yahoo, and Bing are all crawling with massive numbers of bots concurrently (~100 total, mostly Baidu and Google)
- The good thing is that, according to `dspace.log.2017-08-01`, they are all using the same Tomcat session
- This means our Tomcat Crawler Session Valve is working
- But many of the bots are browsing dynamic URLs like:
  - /handle/10568/3353/discover
  - /handle/10568/16510/browse
- The `robots.txt` only blocks the top-level `/discover` and `/browse` URLs… we will need to find a way to forbid them from accessing these!
- Relevant issue from DSpace Jira (semi-resolved in DSpace 6.0): https://jira.duraspace.org/browse/DS-2962
- It turns out that we’re already adding the `X-Robots-Tag "none"` HTTP header, but this only forbids the search engine from indexing the page, not crawling it!
- Also, the bot has to successfully browse the page first so it can receive the HTTP header…
- We might actually have to block these requests with HTTP 403 depending on the user agent
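- A quick way to see what a crawler actually gets for one of these URLs is to request it with a bot user agent and look at the response headers; this is just a sketch, and the hostname and user agent string are only examples:

```
# Check what a crawler user agent receives for a dynamic discovery URL
# (hostname and user agent string are only examples)
curl -s -I -A "Baiduspider" https://cgspace.cgiar.org/handle/10568/3353/discover | grep -iE "^HTTP|X-Robots-Tag"
```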
- Abenet pointed out that the CSV of the CGIAR Library Historical Archive collection I sent on July 20th only had ~100 entries, instead of 2415
- This was due to newline characters in the `dc.description.abstract` column, which caused OpenRefine to choke when exporting the CSV
- I exported a new CSV from the collection on DSpace Test and then manually removed the blank lines in vim using `:g/^$/d`
- Then I cleaned up the author authorities and HTML characters in OpenRefine and sent the file back to Abenet
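- For the record, the blank-line cleanup I did in vim could also be done non-interactively with sed, which might be handy for the next export (the file name is just a placeholder):

```
# Delete empty lines from the exported CSV, same effect as :g/^$/d in vim
# (file name is a placeholder)
sed -i '/^$/d' historical-archive.csv
```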
2017-08-02
- Magdalena from CCAFS asked if there was a way to get the top ten items published in 2016 (note: not the top items in 2016!)
- I think Atmire’s Content and Usage Analysis module should be able to do this but I will have to look at the configuration and maybe email Atmire if I can’t figure it out
- I had a look at the module configuration and couldn’t figure out a way to do this, so I opened a ticket on the Atmire tracker
- Atmire responded about the missing workflow statistics issue a few weeks ago but I didn’t see it for some reason
- They said they added a publication and saw the workflow stat for the user, so I should try again and let them know
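- For future reference, the raw Solr statistics core can at least give the most-viewed items for a time window, but it has no notion of publication date, which is exactly the part Magdalena needs, so this is only a partial sketch (the Solr URL, port, and field names are assumptions about a stock DSpace 5.x statistics core):

```
# Sketch: ten most-viewed items during 2016 from the statistics core
# NOTE: this filters by view date, NOT by year of publication, and the
# Solr URL/port and field names are assumptions
curl -s -G "http://localhost:8081/solr/statistics/select" \
  --data-urlencode "q=type:2 AND -isBot:true" \
  --data-urlencode "fq=time:[2016-01-01T00:00:00Z TO 2016-12-31T23:59:59Z]" \
  --data-urlencode "rows=0" \
  --data-urlencode "facet=true" \
  --data-urlencode "facet.field=id" \
  --data-urlencode "facet.limit=10" \
  --data-urlencode "wt=json"
```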
2017-08-05
- Usman from CIFOR emailed to ask about the status of our OAI tests for harvesting their DSpace repository
- I told him that the OAI harvesting appears to not be working properly after the first sync, and that the control panel shows an “Internal error” for that collection
- I don’t see anything related in our logs, so I asked him to check for our server’s IP in their logs
- Also, in the meantime I stopped the harvesting process, reset the status, and restarted the process via the Admin control panel (note: I didn’t reset the collection, just the harvester status!)
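- To see what the harvester sees, the remote OAI-PMH endpoint can also be queried by hand with curl; the base URL and set spec below are placeholders rather than CIFOR’s real values:

```
# Manually check the remote OAI-PMH endpoint that the harvester is pulling from
# (base URL and setSpec are placeholders)
curl -s "https://repository.example.org/oai/request?verb=Identify"
curl -s "https://repository.example.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc&set=col_12345_1"
```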
2017-08-07
- Apply Abenet’s corrections for the CGIAR Library’s Consortium subcommunity (697 records)
- I had to fix a few small things, like moving the `dc.title` column away from the beginning of the row, deleting blank lines in the abstracts in vim using `:g/^$/d`, and adding the `dc.subject[en_US]` column back, as she had deleted it and DSpace didn’t detect the changes made there (we needed to blank the values instead)
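- The column reordering could also be scripted with csvcut from csvkit, which writes out only the listed columns in the order given; the file name and column list here are invented, just to show the shape of it:

```
# Sketch: inspect the header, then write a copy with the id column first
# (file name and column names are invented; csvcut outputs only the listed columns, in that order)
csvcut -n consortium-corrections.csv
csvcut -c "id,collection,dc.title[en_US]" consortium-corrections.csv > consortium-reordered.csv
```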
2017-08-08
- Apply Abenet’s corrections for the CGIAR Library’s historic archive subcommunity (2415 records)
- I had to add the `dc.subject[en_US]` column back with blank values so that DSpace could detect the changes
- I applied the changes in 500 item batches
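- Splitting can be scripted too; a rough bash sketch that keeps the CSV header on each 500-row chunk (file names are placeholders, and it assumes the rows no longer contain embedded newlines):

```
# Split the corrections CSV into 500-row chunks, repeating the header on each chunk
# (file names are placeholders; assumes one record per line)
head -n1 archive-corrections.csv > header.csv
tail -n+2 archive-corrections.csv | split -l 500 - chunk_
for f in chunk_*; do cat header.csv "$f" > "import_${f}.csv"; done
```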
2017-08-09
- Run system updates on DSpace Test and reboot server
- Help ICARDA upgrade their MELSpace to DSpace 5.7 using the docker-dspace container
- We had to import the PostgreSQL dump to the PostgreSQL container using: `pg_restore -U postgres -d dspace blah.dump`
- Otherwise, when using `-O` it messes up the permissions on the schema and DSpace can’t read it
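- For next time, the whole restore can be run against the container directly with docker cp and docker exec; the container name here is a placeholder:

```
# Copy the dump into the PostgreSQL container and restore it there
# (the container name "dspace_postgres" is a placeholder)
docker cp blah.dump dspace_postgres:/tmp/blah.dump
docker exec dspace_postgres pg_restore -U postgres -d dspace /tmp/blah.dump
```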