<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

<meta property="og:title" content="February, 2018" />
<meta property="og:description" content="2018-02-01
Peter gave feedback on the dc.rights proof of concept that I had sent him last week
We don’t need to distinguish between internal and external works, so that makes it just a simple list
Yesterday I figured out how to monitor DSpace sessions using JMX
I copied the logic in the jmx_tomcat_dbpools provided by Ubuntu’s munin-plugins-java package and used the stuff I discovered about JMX in 2018-01
" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://alanorth.github.io/cgspace-notes/2018-02/" />
<meta property="article:published_time" content="2018-02-01T16:28:54+02:00" />
<meta property="article:modified_time" content="2020-11-18T17:15:23+02:00" />

<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="February, 2018"/>
<meta name="twitter:description" content="2018-02-01
Peter gave feedback on the dc.rights proof of concept that I had sent him last week
We don’t need to distinguish between internal and external works, so that makes it just a simple list
Yesterday I figured out how to monitor DSpace sessions using JMX
I copied the logic in the jmx_tomcat_dbpools provided by Ubuntu’s munin-plugins-java package and used the stuff I discovered about JMX in 2018-01
"/>
<meta name="generator" content="Hugo 0.93.1" />
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "BlogPosting",
  "headline": "February, 2018",
  "url": "https://alanorth.github.io/cgspace-notes/2018-02/",
  "wordCount": "6410",
  "datePublished": "2018-02-01T16:28:54+02:00",
  "dateModified": "2020-11-18T17:15:23+02:00",
  "author": {
      "@type": "Person",
      "name": "Alan Orth"
  },
  "keywords": "Notes"
}
</script>
<link rel="canonical" href="https://alanorth.github.io/cgspace-notes/2018-02/">
<title>February, 2018 | CGSpace Notes</title>

<!-- combined, minified CSS -->
<link href="https://alanorth.github.io/cgspace-notes/css/style.beb8012edc08ba10be012f079d618dc243812267efe62e11f22fe49618f976a4.css" rel="stylesheet" integrity="sha256-vrgBLtwIuhC+AS8HnWGNwkOBImfv5i4R8i/klhj5dqQ=" crossorigin="anonymous">

<!-- minified Font Awesome for SVG icons -->
<script defer src="https://alanorth.github.io/cgspace-notes/js/fontawesome.min.f5072c55a0721857184db93a50561d7dc13975b4de2e19db7f81eb5f3fa57270.js" integrity="sha256-9QcsVaByGFcYTbk6UFYdfcE5dbTeLhnbf4HrXz+lcnA=" crossorigin="anonymous"></script>

<!-- RSS 2.0 feed -->
</head>
<body>
<div class="blog-masthead">
<div class="container">
<nav class="nav blog-nav">
<a class="nav-link " href="https://alanorth.github.io/cgspace-notes/">Home</a>
</nav>
</div>
</div>

<header class="blog-header">
<div class="container">
<h1 class="blog-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/" rel="home">CGSpace Notes</a></h1>
<p class="lead blog-description" dir="auto">Documenting day-to-day work on the <a href="https://cgspace.cgiar.org">CGSpace</a> repository.</p>
</div>
</header>

<div class="container">
<div class="row">
<div class="col-sm-8 blog-main">
<article class="blog-post">
<header>
<h2 class="blog-post-title" dir="auto"><a href="https://alanorth.github.io/cgspace-notes/2018-02/">February, 2018</a></h2>
<p class="blog-post-meta">
<time datetime="2018-02-01T16:28:54+02:00">Thu Feb 01, 2018</time>
in
<span class="fas fa-folder" aria-hidden="true"></span> <a href="/cgspace-notes/categories/notes/" rel="category tag">Notes</a>
</p>
</header>
<h2 id="2018-02-01">2018-02-01</h2>
<ul>
<li>Peter gave feedback on the <code>dc.rights</code> proof of concept that I had sent him last week</li>
<li>We don’t need to distinguish between internal and external works, so that makes it just a simple list</li>
<li>Yesterday I figured out how to monitor DSpace sessions using JMX</li>
<li>I copied the logic in the <code>jmx_tomcat_dbpools</code> plugin provided by Ubuntu’s <code>munin-plugins-java</code> package and used the stuff I discovered about JMX <a href="/cgspace-notes/2018-01/">in 2018-01</a> (a rough sketch of the Tomcat side is after this list)</li>
</ul>
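<p>For reference, Tomcat needs remote JMX enabled before Munin’s Java plugins can poll it; a minimal sketch of the kind of thing that goes in <code>/etc/default/tomcat7</code> (the path and the port here are assumptions, not copied from our Ansible templates):</p>
<pre tabindex="0"><code># enable remote JMX so the munin java plugins can poll Tomcat (port is arbitrary)
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=5400 -Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
</code></pre>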
<p><img src="/cgspace-notes/2018/02/jmx_dspace_sessions-day.png" alt="DSpace Sessions"></p>
<ul>
<li>Run all system updates and reboot DSpace Test</li>
<li>Wow, I packaged up the <code>jmx_dspace_sessions</code> stuff in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> and deployed it on CGSpace and it totally works:</li>
</ul>
<pre tabindex="0"><code># munin-run jmx_dspace_sessions
v_.value 223
v_jspui.value 1
v_oai.value 0
</code></pre><h2 id="2018-02-03">2018-02-03</h2>
<ul>
<li>Bram from Atmire responded about the high load caused by the Solr updater script and said it will be fixed with the updates to DSpace 5.8 compatibility: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566</a></li>
<li>We will close that ticket for now and wait for the 5.8 stuff: <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560</a></li>
<li>I finally took a look at the second round of cleanups Peter had sent me for author affiliations in mid January</li>
<li>After trimming whitespace and quickly scanning for encoding errors I applied them on CGSpace:</li>
</ul>
<pre tabindex="0"><code>$ ./delete-metadata-values.py -i /tmp/2018-02-03-Affiliations-12-deletions.csv -f cg.contributor.affiliation -m 211 -d dspace -u dspace -p 'fuuu'
$ ./fix-metadata-values.py -i /tmp/2018-02-03-Affiliations-1116-corrections.csv -f cg.contributor.affiliation -t correct -m 211 -d dspace -u dspace -p 'fuuu'
</code></pre><ul>
<li>Then I started a full Discovery reindex:</li>
</ul>
<pre tabindex="0"><code>$ time schedtool -D -e ionice -c2 -n7 nice -n19 [dspace]/bin/dspace index-discovery -b
real 96m39.823s
user 14m10.975s
sys 2m29.088s
</code></pre><ul>
<li>Generate a new list of affiliations for Peter to sort through:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'affiliation') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/affiliations.csv with csv;
COPY 3723
</code></pre><ul>
<li>Oh, and it looks like we processed over 3.1 million requests in January, up from 2.9 million in <a href="/cgspace-notes/2017-12/">December</a>:</li>
</ul>
<pre tabindex="0"><code># time zcat --force /var/log/nginx/* | grep -cE "[0-9]{1,2}/Jan/2018"
3126109
real 0m23.839s
user 0m27.225s
sys 0m1.905s
</code></pre><h2 id="2018-02-05">2018-02-05</h2>
<ul>
<li>Toying with correcting authors with trailing spaces via PostgreSQL:</li>
</ul>
<pre tabindex="0"><code>dspace=# update metadatavalue set text_value=REGEXP_REPLACE(text_value, '\s+$', '') where resource_type_id=2 and metadata_field_id=3 and text_value ~ '^.*?\s+$';
UPDATE 20
</code></pre><ul>
<li>I tried the <code>TRIM(TRAILING from text_value)</code> function and it said it changed 20 items but the spaces didn’t go away</li>
<li>This is on a fresh import of the CGSpace database, but when I tried to apply it on CGSpace there were no changes detected. Weird.</li>
<li>Anyways, Peter wants a new list of authors to clean up, so I exported another CSV:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (select distinct text_value, count(*) as count from metadatavalue where metadata_field_id = (select metadata_field_id from metadatafieldregistry where element = 'contributor' and qualifier = 'author') AND resource_type_id = 2 group by text_value order by count desc) to /tmp/authors-2018-02-05.csv with csv;
COPY 55630
</code></pre><h2 id="2018-02-06">2018-02-06</h2>
<ul>
<li>UptimeRobot says CGSpace is down this morning around 9:15</li>
<li>I see 308 PostgreSQL connections in <code>pg_stat_activity</code></li>
<li>The usage otherwise seemed low for REST/OAI as well as XMLUI in the last hour:</li>
</ul>
<pre tabindex="0"><code># date
Tue Feb 6 09:30:32 UTC 2018
# cat /var/log/nginx/rest.log /var/log/nginx/rest.log.1 /var/log/nginx/oai.log /var/log/nginx/oai.log.1 | grep -E "6/Feb/2018:(08|09)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
2 223.185.41.40
2 66.249.64.14
2 77.246.52.40
4 157.55.39.82
4 193.205.105.8
5 207.46.13.63
5 207.46.13.64
6 154.68.16.34
7 207.46.13.66
1548 50.116.102.77
# cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 /var/log/nginx/error.log /var/log/nginx/error.log.1 | grep -E "6/Feb/2018:(08|09)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
77 213.55.99.121
86 66.249.64.14
101 104.196.152.243
103 207.46.13.64
118 157.55.39.82
133 207.46.13.66
136 207.46.13.63
156 68.180.228.157
295 197.210.168.174
752 144.76.64.79
</code></pre><ul>
<li>I did notice in <code>/var/log/tomcat7/catalina.out</code> that Atmire’s update thing was running though</li>
<li>So I restarted Tomcat and now everything is fine</li>
<li>Next time I see that many database connections I need to save the output so I can analyze it later</li>
<li>I’m going to re-schedule the taskUpdateSolrStatsMetadata task as <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=566">Bram detailed in ticket 566</a> to see if it makes CGSpace stop crashing every morning</li>
<li>If I move the task from 3AM to 3PM, ideally CGSpace will stop crashing in the morning, or start crashing ~12 hours later</li>
<li>Atmire has said that there will eventually be a fix for this high load caused by their script, but it will come with the 5.8 compatibility they are already working on</li>
<li>I re-deployed CGSpace with the new task time of 3PM, ran all system updates, and restarted the server</li>
<li>Also, I changed the name of the DSpace fallback pool on DSpace Test and CGSpace to be called ‘dspaceCli’ so that I can distinguish it in <code>pg_stat_activity</code></li>
<li>I implemented some changes to the pooling in the <a href="https://github.com/ilri/rmg-ansible-public">Ansible infrastructure scripts</a> so that each DSpace web application can use its own pool (web, api, and solr); there is a rough sketch of the idea after this list</li>
<li>Each pool uses its own name and hopefully this should help me figure out which one is using too many connections next time CGSpace goes down</li>
<li>Also, this will mean that when a search bot comes along and hammers the XMLUI, the REST and OAI applications will be fine</li>
<li>I’m not actually sure if the Solr web application uses the database though, so I’ll have to check later and remove it if necessary</li>
<li>I deployed the changes on DSpace Test only for now, so I will monitor and make them on CGSpace later this week</li>
</ul>
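<p>A rough sketch of what the separate per-application pool definitions in Tomcat’s <code>server.xml</code> might look like. The resource names are the ones from these notes, but everything else here (driver settings, the API pool size) is an assumption rather than a copy of our real config:</p>
<pre tabindex="0"><code>&lt;!-- sketch only: separate pools so each webapp is identifiable in pg_stat_activity --&gt;
&lt;Resource name="jdbc/dspaceWeb" auth="Container" type="javax.sql.DataSource"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" driverClassName="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/dspace" username="dspace" password="fuuu"
    maxActive="250" maxIdle="20" /&gt;
&lt;Resource name="jdbc/dspaceApi" auth="Container" type="javax.sql.DataSource"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" driverClassName="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/dspace" username="dspace" password="fuuu"
    maxActive="50" maxIdle="10" /&gt;
</code></pre>
<p>The web applications then reference their own resource names, and those names (<code>dspaceWeb</code>, <code>dspaceApi</code>, <code>dspaceCli</code>) are what I grep for in <code>pg_stat_activity</code> later in the month.</p>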
<h2 id="2018-02-07">2018-02-07</h2>
<ul>
<li>Abenet wrote to ask a question about the ORCiD lookup not working for one CIAT user on CGSpace</li>
<li>I tried on DSpace Test and indeed the lookup just doesn’t work!</li>
<li>The ORCiD code in DSpace appears to be using <code>http://pub.orcid.org/</code>, but when I go there in the browser it redirects me to <code>https://pub.orcid.org/v2.0/</code></li>
<li>According to <a href="https://groups.google.com/forum/#!topic/orcid-api-users/qfg-HwAB1bk">the announcement</a> the v1 API was moved from <code>http://pub.orcid.org/</code> to <code>https://pub.orcid.org/v1.2</code> until March 1st when it will be discontinued for good</li>
<li>But the old URL is hard coded in DSpace and it doesn’t work anyways, because it currently redirects you to <code>https://pub.orcid.org/v2.0/v1.2</code></li>
<li>So I guess we have to disable that shit once and for all and switch to a controlled vocabulary</li>
<li>CGSpace crashed again, this time around <code>Wed Feb 7 11:20:28 UTC 2018</code></li>
<li>I took a few snapshots of the PostgreSQL activity at the time; the connections were very high at first but reduced on their own as the minutes went on:</li>
</ul>
<pre tabindex="0"><code>$ psql -c 'select * from pg_stat_activity' > /tmp/pg_stat_activity.txt
$ grep -c 'PostgreSQL JDBC' /tmp/pg_stat_activity*
/tmp/pg_stat_activity1.txt:300
/tmp/pg_stat_activity2.txt:272
/tmp/pg_stat_activity3.txt:168
/tmp/pg_stat_activity4.txt:5
/tmp/pg_stat_activity5.txt:6
</code></pre><ul>
<li>Interestingly, all of those 751 connections were idle!</li>
</ul>
<pre tabindex="0"><code>$ grep "PostgreSQL JDBC" /tmp/pg_stat_activity* | grep -c idle
751
</code></pre><ul>
<li>Since I was restarting Tomcat anyways, I decided to deploy the changes to create two different pools for web and API apps</li>
<li>Looking at the Munin graphs, I can see that there were almost double the normal number of DSpace sessions at the time of the crash (and also yesterday!):</li>
</ul>
<p><img src="/cgspace-notes/2018/02/jmx_dspace-sessions-day.png" alt="DSpace Sessions"></p>
<ul>
<li>Indeed it seems like there were over 1800 sessions today around the hours of 10 and 11 AM:</li>
</ul>
<pre tabindex="0"><code>$ grep -E '^2018-02-07 (10|11)' dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
1828
</code></pre><ul>
<li>CGSpace went down again a few hours later, and now the connections to the dspaceWeb pool are maxed at 250 (the new limit I imposed with the new separate pool scheme)</li>
<li>What’s interesting is that the DSpace log says the connections are all busy:</li>
</ul>
<pre tabindex="0"><code>org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-127.0.0.1-8443-exec-328] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:250; busy:250; idle:0; lastwait:5000].
</code></pre><ul>
<li>… but in PostgreSQL I see them <code>idle</code> or <code>idle in transaction</code>:</li>
</ul>
<pre tabindex="0"><code>$ psql -c 'select * from pg_stat_activity' | grep -c dspaceWeb
250
$ psql -c 'select * from pg_stat_activity' | grep dspaceWeb | grep -c idle
250
$ psql -c 'select * from pg_stat_activity' | grep dspaceWeb | grep -c "idle in transaction"
187
</code></pre><ul>
<li>What the fuck, does DSpace think all connections are busy?</li>
<li>I suspect these are issues with abandoned connections or maybe a leak, so I’m going to try adding the <code>removeAbandoned='true'</code> parameter which is apparently off by default</li>
<li>I will try <code>testOnReturn='true'</code> too, just to add more validation, because I’m fucking grasping at straws</li>
<li>Also, WTF, there was a heap space error randomly in catalina.out:</li>
</ul>
<pre tabindex="0"><code>Wed Feb 07 15:01:54 UTC 2018 | Query:containerItem:91917 AND type:2
Exception in thread "http-bio-127.0.0.1-8081-exec-58" java.lang.OutOfMemoryError: Java heap space
</code></pre><ul>
<li>I’m trying to find a way to determine what was using all those Tomcat sessions, but parsing the DSpace log is hard because some IPs are IPv6, which contain colons!</li>
<li>Looking at the first crash this morning around 11, I see these IPv4 addresses making requests around 10 and 11AM:</li>
</ul>
<pre tabindex="0"><code>$ grep -E '^2018-02-07 (10|11)' dspace.log.2018-02-07 | grep -o -E 'ip_addr=[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | sort -n | uniq -c | sort -n | tail -n 20
34 ip_addr=46.229.168.67
34 ip_addr=46.229.168.73
37 ip_addr=46.229.168.76
40 ip_addr=34.232.65.41
41 ip_addr=46.229.168.71
44 ip_addr=197.210.168.174
55 ip_addr=181.137.2.214
55 ip_addr=213.55.99.121
58 ip_addr=46.229.168.65
64 ip_addr=66.249.66.91
67 ip_addr=66.249.66.90
71 ip_addr=207.46.13.54
78 ip_addr=130.82.1.40
104 ip_addr=40.77.167.36
151 ip_addr=68.180.228.157
174 ip_addr=207.46.13.135
194 ip_addr=54.83.138.123
198 ip_addr=40.77.167.62
210 ip_addr=207.46.13.71
214 ip_addr=104.196.152.243
</code></pre><ul>
<li>These IPs made thousands of sessions today:</li>
</ul>
<pre tabindex="0"><code>$ grep 104.196.152.243 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
530
$ grep 207.46.13.71 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
859
$ grep 40.77.167.62 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
610
$ grep 54.83.138.123 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
8
$ grep 207.46.13.135 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
826
$ grep 68.180.228.157 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
727
$ grep 40.77.167.36 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
181
$ grep 130.82.1.40 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
24
$ grep 207.46.13.54 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
166
$ grep 46.229.168 dspace.log.2018-02-07 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq | wc -l
992
</code></pre><ul>
<li>Let’s investigate who these IPs belong to:
<ul>
<li>104.196.152.243 is CIAT, which is already marked as a bot via nginx!</li>
<li>207.46.13.71 is Bing, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
<li>40.77.167.62 is Bing, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
<li>207.46.13.135 is Bing, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
<li>68.180.228.157 is Yahoo, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
<li>40.77.167.36 is Bing, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
<li>207.46.13.54 is Bing, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
<li>46.229.168.x is Semrush, which is already marked as a bot in Tomcat’s Crawler Session Manager Valve!</li>
</ul>
</li>
<li>Nice, so these are all known bots that are already crammed into one session by Tomcat’s Crawler Session Manager Valve.</li>
<li>What in the actual fuck, why is our load doing this? It’s gotta be something fucked up with the database pool being “busy” but everything is fucking idle</li>
<li>One that I should probably add in nginx is 54.83.138.123, which is apparently the following user agent:</li>
</ul>
<pre tabindex="0"><code>BUbiNG (+http://law.di.unimi.it/BUbiNG.html)
</code></pre><ul>
<li>This one makes two thousand requests per day or so recently:</li>
</ul>
<pre tabindex="0"><code># grep -c BUbiNG /var/log/nginx/access.log /var/log/nginx/access.log.1
/var/log/nginx/access.log:1925
/var/log/nginx/access.log.1:2029
</code></pre><ul>
<li>And they have 30 IPs, so fuck that shit I’m going to add them to the Tomcat Crawler Session Manager Valve nowwww (there is a sketch of the valve config at the end of this section)</li>
<li>Lots of discussions on the dspace-tech mailing list over the last few years about leaky transactions being a known problem with DSpace</li>
<li>Helix84 recommends restarting PostgreSQL instead of Tomcat because it restarts quicker</li>
<li>This is how the connections looked when it crashed this afternoon:</li>
</ul>
<pre tabindex="0"><code>$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
5 dspaceApi
290 dspaceWeb
</code></pre><ul>
<li>This is how it is right now:</li>
</ul>
<pre tabindex="0"><code>$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
5 dspaceApi
5 dspaceWeb
</code></pre><ul>
<li>So is this just some fucked up XMLUI database leaking?</li>
<li>I notice there is an issue (that I’ve probably noticed before) on the Jira tracker about this that was fixed in DSpace 5.7: <a href="https://jira.duraspace.org/browse/DS-3551">https://jira.duraspace.org/browse/DS-3551</a></li>
<li>I seriously doubt this leaking shit is fixed for sure, but I’m gonna cherry-pick all those commits and try them on DSpace Test and probably even CGSpace because I’m fed up with this shit</li>
<li>I cherry-picked all the commits for DS-3551 but it won’t build on our current DSpace 5.5!</li>
<li>I sent a message to the dspace-tech mailing list asking why DSpace thinks these connections are busy when PostgreSQL says they are idle</li>
</ul>
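<p>For reference, this is roughly what the Crawler Session Manager Valve in Tomcat’s <code>server.xml</code> would look like with BUbiNG added to the default user agent pattern (a sketch, not copied from our config):</p>
<pre tabindex="0"><code>&lt;!-- sketch: Tomcat's default crawlerUserAgents plus BUbiNG --&gt;
&lt;Valve className="org.apache.catalina.valves.CrawlerSessionManagerValve"
       crawlerUserAgents=".*[bB]ot.*|.*Yahoo! Slurp.*|.*Feedfetcher-Google.*|.*BUbiNG.*" /&gt;
</code></pre>
<p>The default pattern already matches bingbot, Yahoo! Slurp, and SemrushBot, which is presumably why those crawlers were each crammed into a single session above.</p>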
<h2 id="2018-02-10">2018-02-10</h2>
<ul>
<li>I tried to disable ORCID lookups but keep the existing authorities</li>
<li>This item has an ORCID for Ralf Kiese: http://localhost:8080/handle/10568/89897</li>
<li>Switch authority.controlled off and change authorLookup to lookup, and the ORCID badge doesn’t show up on the item</li>
<li>Leave all settings but change choices.presentation to lookup: the ORCID badge is there, but item submission uses the LC Name Authority and breaks with this error:</li>
</ul>
<pre tabindex="0"><code>Field dc_contributor_author has choice presentation of type "select", it may NOT be authority-controlled.
</code></pre><ul>
<li>If I change choices.presentation to suggest it gives this error:</li>
</ul>
<pre tabindex="0"><code>xmlui.mirage2.forms.instancedCompositeFields.noSuggestionError
</code></pre><ul>
<li>So I don’t think we can disable the ORCID lookup function and keep the ORCID badges (the relevant dspace.cfg properties are sketched below)</li>
</ul>
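<p>For my own reference, these are roughly the <code>dspace.cfg</code> properties I was toggling; the property names follow the usual <code>schema.element.qualifier</code> pattern, but the plugin name and current values here are from memory and should be treated as assumptions:</p>
<pre tabindex="0"><code># roughly the properties involved (values from memory, so double check them)
authority.controlled.dc.contributor.author = true
choices.plugin.dc.contributor.author = SolrAuthorAuthority
choices.presentation.dc.contributor.author = authorLookup
</code></pre>
<p>Turning <code>authority.controlled</code> off is what makes the badge disappear, and switching <code>choices.presentation</code> to <code>lookup</code> or <code>suggest</code> is what produced the errors above.</p>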
<h2 id="2018-02-11">2018-02-11</h2>
<ul>
<li>Magdalena from CCAFS emailed to ask why one of their items has such a weird thumbnail: <a href="https://cgspace.cgiar.org/handle/10568/90735">10568/90735</a></li>
</ul>
<p><img src="/cgspace-notes/2018/02/CCAFS_WP_223.pdf.jpg" alt="Weird thumbnail"></p>
<ul>
<li>I downloaded the PDF and manually generated a thumbnail with ImageMagick and it looked better:</li>
</ul>
<pre tabindex="0"><code>$ convert CCAFS_WP_223.pdf\[0\] -profile /usr/local/share/ghostscript/9.22/iccprofiles/default_cmyk.icc -thumbnail 600x600 -flatten -profile /usr/local/share/ghostscript/9.22/iccprofiles/default_rgb.icc CCAFS_WP_223.jpg
</code></pre><p><img src="/cgspace-notes/2018/02/CCAFS_WP_223.jpg" alt="Manual thumbnail"></p>
<ul>
<li>Peter sent me corrected author names last week but the file encoding is messed up:</li>
</ul>
<pre tabindex="0"><code>$ isutf8 authors-2018-02-05.csv
authors-2018-02-05.csv: line 100, char 18, byte 4179: After a first byte between E1 and EC, expecting the 2nd byte between 80 and BF.
</code></pre><ul>
<li>The <code>isutf8</code> program comes from <code>moreutils</code></li>
<li>Line 100 contains: Galiè, Alessandra</li>
<li>In other news, psycopg2 is splitting their package in pip, so to install the binary wheel distribution you need to use <code>pip install psycopg2-binary</code></li>
<li>See: <a href="http://initd.org/psycopg/articles/2018/02/08/psycopg-274-released/">http://initd.org/psycopg/articles/2018/02/08/psycopg-274-released/</a></li>
<li>I updated my <code>fix-metadata-values.py</code> and <code>delete-metadata-values.py</code> scripts on the scripts page: <a href="https://github.com/ilri/DSpace/wiki/Scripts">https://github.com/ilri/DSpace/wiki/Scripts</a></li>
<li>I ran the 342 author corrections (after trimming whitespace and excluding those with <code>||</code> and other syntax errors) on CGSpace:</li>
</ul>
<pre tabindex="0"><code>$ ./fix-metadata-values.py -i Correct-342-Authors-2018-02-11.csv -f dc.contributor.author -t correct -m 3 -d dspace -u dspace -p 'fuuu'
</code></pre><ul>
<li>Then I ran a full Discovery re-indexing:</li>
</ul>
<pre tabindex="0"><code>$ export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xmx1024m"
$ time schedtool -D -e ionice -c2 -n7 nice -n19 dspace index-discovery -b
</code></pre><ul>
<li>That reminds me that Bizu had asked me to fix some of Alan Duncan’s names in December</li>
<li>I see he actually has some variations with “Duncan, Alan J.”: <a href="https://cgspace.cgiar.org/discover?filtertype_1=author&amp;filter_relational_operator_1=contains&amp;filter_1=Duncan%2C+Alan&amp;submit_apply_filter=&amp;query=">https://cgspace.cgiar.org/discover?filtertype_1=author&amp;filter_relational_operator_1=contains&amp;filter_1=Duncan%2C+Alan&amp;submit_apply_filter=&amp;query=</a></li>
<li>I will just update those for her too and then restart the indexing:</li>
</ul>
<pre tabindex="0"><code>dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like '%Duncan, Alan%';
text_value | authority | confidence
-----------------+--------------------------------------+------------
Duncan, Alan J. | 5ff35043-942e-4d0a-b377-4daed6e3c1a3 | 600
Duncan, Alan J. | 62298c84-4d9d-4b83-a932-4a9dd4046db7 | -1
Duncan, Alan J. | | -1
Duncan, Alan | a6486522-b08a-4f7a-84f9-3a73ce56034d | 600
Duncan, Alan J. | cd0e03bf-92c3-475f-9589-60c5b042ea60 | -1
Duncan, Alan J. | a6486522-b08a-4f7a-84f9-3a73ce56034d | -1
Duncan, Alan J. | 5ff35043-942e-4d0a-b377-4daed6e3c1a3 | -1
Duncan, Alan J. | a6486522-b08a-4f7a-84f9-3a73ce56034d | 600
(8 rows)
dspace=# begin;
dspace=# update metadatavalue set text_value='Duncan, Alan', authority='a6486522-b08a-4f7a-84f9-3a73ce56034d', confidence=600 where resource_type_id=2 and metadata_field_id=3 and text_value like 'Duncan, Alan%';
UPDATE 216
dspace=# select distinct text_value, authority, confidence from metadatavalue where resource_type_id=2 and metadata_field_id=3 and text_value like '%Duncan, Alan%';
text_value | authority | confidence
--------------+--------------------------------------+------------
Duncan, Alan | a6486522-b08a-4f7a-84f9-3a73ce56034d | 600
(1 row)
dspace=# commit;
</code></pre><ul>
<li>Run all system updates on DSpace Test (linode02) and reboot it</li>
<li>I wrote a Python script (<a href="https://gist.github.com/alanorth/57a88379126d844563c1410bd7b8d12b"><code>resolve-orcids-from-solr.py</code></a>) using SolrClient to parse the Solr authority cache for ORCID IDs (a rough shell approximation of the idea is sketched after this list)</li>
<li>We currently have 1562 authority records with ORCID IDs, and 624 unique IDs</li>
<li>We can use this to build a controlled vocabulary of ORCID IDs for new item submissions</li>
<li>I don’t know how to add ORCID IDs to existing items yet… some more querying of PostgreSQL for authority values perhaps?</li>
<li>I added the script to the <a href="https://github.com/ilri/DSpace/wiki/Scripts">ILRI DSpace wiki on GitHub</a></li>
</ul>
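<p>The script uses SolrClient, but the basic idea can be approximated from the shell; the Solr URL and the <code>rows</code> value here are assumptions, and the regex is the same one I use elsewhere for ORCID iDs:</p>
<pre tabindex="0"><code>$ curl -s 'http://localhost:8080/solr/authority/select?q=*:*&amp;wt=json&amp;rows=100000' | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq | wc -l
624
</code></pre>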
<h2 id="2018-02-12">2018-02-12</h2>
<ul>
<li>Follow up with Atmire on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 Compatibility ticket</a> to ask again if they want me to send them a DSpace 5.8 branch to work on</li>
<li>Abenet asked if there was a way to get the number of submissions she and Bizuwork did</li>
<li>I said that the Atmire Workflow Statistics module was supposed to be able to do that</li>
<li>We had tried it in <a href="/cgspace-notes/2017-06/">June, 2017</a> and found that it didn’t work</li>
<li>Atmire sent us some fixes but they didn’t work either</li>
<li>I just tried the branch with the fixes again and it indeed does not work:</li>
</ul>
<p><img src="/cgspace-notes/2018/02/atmire-workflow-statistics.png" alt="Atmire Workflow Statistics No Data Available"></p>
<ul>
<li>I see that in <a href="/cgspace-notes/2017-04/">April, 2017</a> I just used a SQL query to get a user’s submissions by checking the <code>dc.description.provenance</code> field</li>
<li>So for Abenet, I can check her submissions in December, 2017 with:</li>
</ul>
<pre tabindex="0"><code>dspace=# select * from metadatavalue where resource_type_id=2 and metadata_field_id=28 and text_value ~ '^Submitted.*yabowork.*2017-12.*';
</code></pre><ul>
<li>I emailed Peter to ask whether we can move DSpace Test to a new Linode server and attach 300 GB of disk space to it</li>
<li>This would be using <a href="https://www.linode.com/blockstorage">Linode’s new block storage volumes</a></li>
<li>I think our current $40/month Linode has enough CPU and memory capacity, but we need more disk space</li>
<li>I think I’d probably just attach the block storage volume and mount it on /home/dspace (roughly as sketched after this list)</li>
<li>Ask Peter about <code>dc.rights</code> on DSpace Test again, if he likes it then we should move it to CGSpace soon</li>
</ul>
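<p>Mounting a Linode block storage volume is pretty simple; a rough sketch, where the volume label <code>dspacetest</code> and the mount options are assumptions:</p>
<pre tabindex="0"><code># mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_dspacetest
# echo '/dev/disk/by-id/scsi-0Linode_Volume_dspacetest /home/dspace ext4 defaults,noatime,nofail 0 2' >> /etc/fstab
# mount /home/dspace
</code></pre>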
<h2 id="2018-02-13">2018-02-13</h2>
<ul>
<li>Peter said he was getting a “socket closed” error on CGSpace</li>
<li>I looked in the dspace.log.2018-02-13 and saw one recent one:</li>
</ul>
<pre tabindex="0"><code>2018-02-13 12:50:13,656 ERROR org.dspace.storage.rdbms.DatabaseManager @ SQL QueryTable Error -
org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
...
Caused by: java.net.SocketException: Socket closed
</code></pre><ul>
<li>Could be because of the <code>removeAbandoned="true"</code> that I enabled in the JDBC connection pool last week?</li>
</ul>
<pre tabindex="0"><code>$ grep -c "java.net.SocketException: Socket closed" dspace.log.2018-02-*
dspace.log.2018-02-01:0
dspace.log.2018-02-02:0
dspace.log.2018-02-03:0
dspace.log.2018-02-04:0
dspace.log.2018-02-05:0
dspace.log.2018-02-06:0
dspace.log.2018-02-07:0
dspace.log.2018-02-08:1
dspace.log.2018-02-09:6
dspace.log.2018-02-10:0
dspace.log.2018-02-11:3
dspace.log.2018-02-12:0
dspace.log.2018-02-13:4
</code></pre><ul>
<li>I apparently added that on 2018-02-07 so it could be, as I don’t see any of those socket closed errors in 2018-01’s logs!</li>
<li>I will increase the removeAbandonedTimeout from its default of 60 to 90 and enable logAbandoned</li>
<li>Peter hit this issue one more time, and this is apparently what Tomcat’s catalina.out log says when an abandoned connection is removed:</li>
</ul>
<pre tabindex="0"><code>Feb 13, 2018 2:05:42 PM org.apache.tomcat.jdbc.pool.ConnectionPool abandon
WARNING: Connection has been abandoned PooledConnection[org.postgresql.jdbc.PgConnection@22e107be]:java.lang.Exception
</code></pre><h2 id="2018-02-14">2018-02-14</h2>
<ul>
<li>Skype with Peter and the Addis team to discuss what we need to do for the ORCIDs in the immediate future</li>
<li>We said we’d start with a controlled vocabulary for <code>cg.creator.id</code> on the DSpace Test submission form, where we store the author name and the ORCID in some format like: Alan S. Orth (0000-0002-1735-7458)</li>
<li>Eventually we need to find a way to print the author names with links to their ORCID profiles</li>
<li>Abenet will send an email to the partners to give us ORCID IDs for their authors and to stress that they update their name format on ORCID.org if they want it in a special way</li>
<li>I sent the Codeobia guys a question to ask how they prefer that we store the IDs, ie one of:
<ul>
<li>Alan Orth - 0000-0002-1735-7458</li>
<li>Alan Orth: 0000-0002-1735-7458</li>
<li>Alan S. Orth (0000-0002-1735-7458)</li>
</ul>
</li>
<li>Atmire responded on the <a href="https://tracker.atmire.com/tickets-cgiar-ilri/view-ticket?id=560">DSpace 5.8 compatibility ticket</a> and said they will let me know if they want me to give them a clean 5.8 branch</li>
<li>I formatted my list of ORCID IDs as a controlled vocabulary, sorted alphabetically, then ran it through XML tidy:</li>
</ul>
<pre tabindex="0"><code>$ sort cgspace-orcids.txt > dspace/config/controlled-vocabularies/cg-creator-id.xml
$ add XML formatting...
$ tidy -xml -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
</code></pre><ul>
<li>It seems the tidy fucks up accents, for example it turns <code>Adriana Tofiño (0000-0001-7115-7169)</code> into <code>Adriana Tofiño (0000-0001-7115-7169)</code></li>
<li>We need to force UTF-8:</li>
</ul>
<pre tabindex="0"><code>$ tidy -xml -utf8 -iq -m -w 0 dspace/config/controlled-vocabularies/cg-creator-id.xml
</code></pre><ul>
<li>This preserves special accent characters</li>
<li>I tested the display and store of these in the XMLUI and PostgreSQL and it looks good</li>
<li>Sisay exported all ILRI, CIAT, etc authors from ORCID and sent a list of 600+</li>
<li>Peter combined it with mine and we have 1204 unique ORCIDs!</li>
</ul>
<pre tabindex="0"><code>$ grep -coE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' CGcenter_ORCID_ID_combined.csv
1204
$ grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' CGcenter_ORCID_ID_combined.csv | sort | uniq | wc -l
1204
</code></pre><ul>
<li>Also, save that regex for the future because it will be very useful!</li>
<li>CIAT sent a list of their authors’ ORCIDs and combined with ours there are now 1227:</li>
</ul>
<pre tabindex="0"><code>$ cat CGcenter_ORCID_ID_combined.csv ciat-orcids.txt | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq | wc -l
1227
</code></pre><ul>
<li>There are some formatting issues with names in Peter’s list, so I should remember to re-generate the list of names from ORCID’s API once we’re done</li>
<li>The <code>dspace cleanup -v</code> currently fails on CGSpace with the following:</li>
</ul>
<pre tabindex="0"><code> - Deleting bitstream record from database (ID: 149473)
Error: ERROR: update or delete on table "bitstream" violates foreign key constraint "bundle_primary_bitstream_id_fkey" on table "bundle"
Detail: Key (bitstream_id)=(149473) is still referenced from table "bundle".
</code></pre><ul>
<li>The solution is to update the bitstream table, as I’ve discovered several other times in 2016 and 2017:</li>
</ul>
<pre tabindex="0"><code>$ psql dspace -c 'update bundle set primary_bitstream_id=NULL where primary_bitstream_id in (149473);'
UPDATE 1
</code></pre><ul>
<li>Then the cleanup process will continue for a while and hit another foreign key conflict, and eventually it will complete after you manually resolve them all (roughly the loop sketched below)</li>
</ul>
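<p>Resolving them one at a time gets old, so the idea is roughly this loop; it assumes <code>dspace cleanup</code> exits non-zero when it hits the constraint violation, which I have not actually verified:</p>
<pre tabindex="0"><code>$ while ! [dspace]/bin/dspace cleanup -v > /tmp/cleanup.log 2>&amp;1; do
    # pull the offending bitstream ID out of the error message and null it in the bundle table
    id=$(grep -oE 'Key \(bitstream_id\)=\([0-9]+\)' /tmp/cleanup.log | grep -oE '[0-9]+')
    psql dspace -c "update bundle set primary_bitstream_id=NULL where primary_bitstream_id in ($id);"
  done
</code></pre>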
<h2 id="2018-02-15">2018-02-15</h2>
<ul>
<li>Altmetric seems to be indexing DSpace Test for some reason:
<ul>
<li>See this item on DSpace Test: <a href="https://dspacetest.cgiar.org/handle/10568/78450">https://dspacetest.cgiar.org/handle/10568/78450</a></li>
<li>See the corresponding page on Altmetric: <a href="https://www.altmetric.com/details/handle/10568/78450">https://www.altmetric.com/details/handle/10568/78450</a></li>
</ul>
</li>
<li>And this item doesn’t even exist on CGSpace!</li>
<li>Start working on XMLUI item display code for ORCIDs</li>
<li>Send emails to Macaroni Bros and Usman at CIFOR about ORCID metadata</li>
<li>CGSpace crashed while I was driving to Tel Aviv, and was down for four hours!</li>
<li>I only looked quickly in the logs but saw a bunch of database errors</li>
<li>PostgreSQL connections are currently:</li>
</ul>
<pre tabindex="0"><code>$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | uniq -c
2 dspaceApi
1 dspaceWeb
3 dspaceApi
</code></pre><ul>
<li>I see shitloads of memory errors in Tomcat’s logs:</li>
</ul>
<pre tabindex="0"><code># grep -c "Java heap space" /var/log/tomcat7/catalina.out
56
</code></pre><ul>
<li>And shit tons of database connections abandoned:</li>
</ul>
<pre tabindex="0"><code># grep -c 'org.apache.tomcat.jdbc.pool.ConnectionPool abandon' /var/log/tomcat7/catalina.out
612
</code></pre><ul>
<li>I have no fucking idea why it crashed</li>
<li>The XMLUI activity looks like:</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/access.log /var/log/nginx/access.log.1 /var/log/nginx/library-access.log /var/log/nginx/library-access.log.1 /var/log/nginx/error.log /var/log/nginx/error.log.1 | grep -E "15/Feb/2018" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
715 63.143.42.244
746 213.55.99.121
886 68.180.228.157
967 66.249.66.90
1013 216.244.66.245
1177 197.210.168.174
1419 207.46.13.159
1512 207.46.13.59
1554 207.46.13.157
2018 104.196.152.243
</code></pre><h2 id="2018-02-17">2018-02-17</h2>
<ul>
<li>Peter pointed out that we had an incorrect sponsor in the controlled vocabulary: <code>U.S. Agency for International Development</code> → <code>United States Agency for International Development</code></li>
<li>I made a pull request to fix it (<a href="https://github.com/ilri/DSpace/pull/354">#354</a>)</li>
<li>I should remember to update existing values in PostgreSQL too:</li>
</ul>
<pre tabindex="0"><code>dspace=# update metadatavalue set text_value='United States Agency for International Development' where resource_type_id=2 and metadata_field_id=29 and text_value like '%U.S. Agency for International Development%';
UPDATE 2
</code></pre><h2 id="2018-02-18">2018-02-18</h2>
<ul>
<li>ICARDA’s Mohamed Salem pointed out that it would be easiest to format the <code>cg.creator.id</code> field like “Alan Orth: 0000-0002-1735-7458” because no name will have a “:” so it’s easier to split on (see the little demo at the end of this section)</li>
<li>I finally figured out a few ways to extract ORCID iDs from metadata using XSLT and display them in the XMLUI:</li>
</ul>
<p><img src="/cgspace-notes/2018/02/xmlui-orcid-display.png" alt="Displaying ORCID iDs in XMLUI"></p>
<ul>
<li>The one on the bottom left uses a similar format to our author display, and the one in the middle uses the format <a href="https://orcid.org/trademark-and-id-display-guidelines">recommended by ORCID’s branding guidelines</a></li>
<li>Also, I realized that the Academicons font icon set we’re using includes an ORCID badge so we don’t need to use the PNG image anymore</li>
<li>Run system updates on DSpace Test (linode02) and reboot the server</li>
<li>Looking back at the system errors on 2018-02-15, I wonder what the fuck caused this:</li>
</ul>
<pre tabindex="0"><code>$ wc -l dspace.log.2018-02-1{0..8}
383483 dspace.log.2018-02-10
275022 dspace.log.2018-02-11
249557 dspace.log.2018-02-12
280142 dspace.log.2018-02-13
615119 dspace.log.2018-02-14
4388259 dspace.log.2018-02-15
243496 dspace.log.2018-02-16
209186 dspace.log.2018-02-17
167432 dspace.log.2018-02-18
</code></pre><ul>
<li>From an average of a few hundred thousand to over four million lines in the DSpace log?</li>
<li>Using grep’s <code>-B1</code> I can see the line before the heap space error, which has the time, ie:</li>
</ul>
<pre tabindex="0"><code>2018-02-15 16:02:12,748 ERROR org.dspace.app.xmlui.cocoon.DSpaceCocoonServletFilter @ Serious Error Occurred Processing Request!
org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.OutOfMemoryError: Java heap space
</code></pre><ul>
<li>So these errors happened at hours 16, 18, 19, and 20</li>
<li>Let’s see what was going on in nginx then:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log.{3,4}.gz | wc -l
168571
# zcat --force /var/log/nginx/*.log.{3,4}.gz | grep -E "15/Feb/2018:(16|18|19|20)" | wc -l
8188
</code></pre><ul>
<li>Only 8,000 requests during those four hours, out of 170,000 the whole day!</li>
<li>And the usage of XMLUI, REST, and OAI looks SUPER boring:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log.{3,4}.gz | grep -E "15/Feb/2018:(16|18|19|20)" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
111 95.108.181.88
158 45.5.184.221
201 104.196.152.243
205 68.180.228.157
236 40.77.167.131
253 207.46.13.159
293 207.46.13.59
296 63.143.42.242
303 207.46.13.157
416 63.143.42.244
</code></pre><ul>
<li>63.143.42.244 is Uptime Robot, and 207.46.x.x is Bing!</li>
<li>The DSpace sessions, PostgreSQL connections, and JVM memory all look normal</li>
<li>I see a lot of AccessShareLock on February 15th…?</li>
</ul>
<p><img src="/cgspace-notes/2018/02/postgresql-locks-week.png" alt="PostgreSQL locks"></p>
<ul>
<li>I have no idea what caused this crash</li>
<li>In other news, I adjusted the ORCID badge size on the XMLUI item display and sent it back to Peter for feedback</li>
</ul>
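<p>Mohamed’s point about the “Name: iD” format is easy to see on the command line; splitting on “: ” is basically all the XSLT has to do (this is just an illustration, not the actual theme code):</p>
<pre tabindex="0"><code>$ echo 'Alan Orth: 0000-0002-1735-7458' | awk -F': ' '{print $1}'
Alan Orth
$ echo 'Alan Orth: 0000-0002-1735-7458' | awk -F': ' '{print $2}'
0000-0002-1735-7458
</code></pre>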
<h2 id="2018-02-19">2018-02-19</h2>
<ul>
<li>Combined list of CGIAR author ORCID iDs is up to 1,500:</li>
</ul>
<pre tabindex="0"><code>$ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml ORCID_ID_CIAT_IITA_IWMI-csv.csv CGcenter_ORCID_ID_combined.csv | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq | wc -l
1571
</code></pre><ul>
<li>I updated my <code>resolve-orcids-from-solr.py</code> script to be able to resolve ORCID identifiers from a text file so I renamed it to <code>resolve-orcids.py</code></li>
<li>Also, I updated it so it uses several new options:</li>
</ul>
<pre tabindex="0"><code>$ ./resolve-orcids.py -i input.txt -o output.txt
$ cat output.txt
Ali Ramadhan: 0000-0001-5019-1368
Ahmad Maryudi: 0000-0001-5051-7217
</code></pre><ul>
<li>I was running this on the new list of 1571 and found an error:</li>
</ul>
<pre tabindex="0"><code>Looking up the name associated with ORCID iD: 0000-0001-9634-1958
Traceback (most recent call last):
File "./resolve-orcids.py", line 111, in &lt;module&gt;
read_identifiers_from_file()
File "./resolve-orcids.py", line 37, in read_identifiers_from_file
resolve_orcid_identifiers(orcids)
File "./resolve-orcids.py", line 65, in resolve_orcid_identifiers
family_name = data['name']['family-name']['value']
TypeError: 'NoneType' object is not subscriptable
</code></pre><ul>
<li>According to ORCID that identifier’s family-name is null so that sucks</li>
<li>I fixed the script so that it checks if the family name is null</li>
<li>Now another:</li>
</ul>
<pre tabindex="0"><code>Looking up the name associated with ORCID iD: 0000-0002-1300-3636
Traceback (most recent call last):
File "./resolve-orcids.py", line 117, in &lt;module&gt;
read_identifiers_from_file()
File "./resolve-orcids.py", line 37, in read_identifiers_from_file
resolve_orcid_identifiers(orcids)
File "./resolve-orcids.py", line 65, in resolve_orcid_identifiers
if data['name']['given-names']:
TypeError: 'NoneType' object is not subscriptable
</code></pre><ul>
<li>According to ORCID that identifier’s entire name block is null! (a quick way to check is sketched below)</li>
</ul>
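<p>A quick way to see what the script is dealing with, using the public v2.0 API’s <code>/person</code> section (<code>jq</code> assumed to be installed; the structure matches the paths in the tracebacks above):</p>
<pre tabindex="0"><code>$ curl -s -H 'Accept: application/json' 'https://pub.orcid.org/v2.0/0000-0001-9634-1958/person' | jq '.name."family-name"'
null
$ curl -s -H 'Accept: application/json' 'https://pub.orcid.org/v2.0/0000-0002-1300-3636/person' | jq '.name'
null
</code></pre>
<p>So the script just needs to treat a null <code>name</code> or <code>family-name</code> as “no usable name” and skip or flag that identifier.</p>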
2019-12-17 13:49:24 +01:00
< h2 id = "2018-02-20" > 2018-02-20< / h2 >
2018-02-20 08:52:39 +01:00
< ul >
< li > Send Abenet an email about getting a purchase requisition for a new DSpace Test server on Linode< / li >
2020-01-27 15:20:44 +01:00
< li > Discuss some of the issues with null values and poor-quality names in some ORCID identifiers with Abenet and I think we’ ll now only use ORCID iDs that have been sent to use from partners, not those extracted via keyword searches on orcid.org< / li >
< li > This should be the version we use (the existing controlled vocabulary generated from CGSpace’ s Solr authority core plus the IDs sent to us so far by partners):< / li >
2019-11-28 16:30:45 +01:00
< / ul >
2022-03-04 13:30:06 +01:00
< pre tabindex = "0" > < code > $ cat ~/src/git/DSpace/dspace/config/controlled-vocabularies/cg-creator-id.xml ORCID_ID_CIAT_IITA_IWMI.csv | grep -oE ' [A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq > 2018-02-20-combined.txt
2019-11-28 16:30:45 +01:00
</code></pre><ul>
<li>I updated the <code>resolve-orcids.py</code> script to use the “credit-name” if it exists in a profile, falling back to “given-names” + “family-name”</li>
<li>Also, I added color coded output to the debug messages and added a “quiet” mode that suppresses the normal behavior of printing results to the screen</li>
<li>I’m using this as the test input for <code>resolve-orcids.py</code>:</li>
</ul>
<pre tabindex="0"><code>$ cat orcid-test-values.txt
# valid identifier with 'given-names' and 'family-name'
0000-0001-5019-1368
# duplicate identifier
0000-0001-5019-1368
# invalid identifier
0000-0001-9634-19580
# has a 'credit-name' value we should prefer
0000-0002-1735-7458
2022-03-04 13:30:06 +01:00
# has a blank ' credit-name' value
2018-02-20 13:47:28 +01:00
0000-0001-5199-5528
2022-03-04 13:30:06 +01:00
# has a null ' name' object
2018-02-20 13:47:28 +01:00
0000-0002-1300-3636
2022-03-04 13:30:06 +01:00
# has a null ' family-name' value
2018-02-20 13:47:28 +01:00
0000-0001-9634-1958
# missing ORCID identifier
0000-0003-4221-3214
2019-11-28 16:30:45 +01:00
< / code > < / pre > < ul >
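<ul>
<li>The name-resolution logic is roughly like this (a simplified sketch of the approach, not the exact script; it also shows the transparent <code>requests-cache</code> caching I mention below on 2018-02-25):</li>
</ul>
<pre tabindex="0"><code>#!/usr/bin/env python3
# Sketch of the name resolution: prefer a non-blank 'credit-name' from
# the ORCID v2.1 person record, otherwise fall back to 'given-names'
# plus 'family-name', guarding against null blocks along the way.
import requests
import requests_cache

# transparently cache ORCID API responses on disk
requests_cache.install_cache('orcid-cache')

def resolve_name(orcid_id):
    url = 'https://pub.orcid.org/v2.1/{0}/person'.format(orcid_id)
    response = requests.get(url, headers={'Accept': 'application/json'})
    if response.status_code != 200:
        return None

    name = response.json().get('name')
    if name is None:
        return None

    credit_name = name.get('credit-name')
    if credit_name and (credit_name.get('value') or '').strip():
        return credit_name['value']

    parts = [n['value'] for n in (name.get('given-names'), name.get('family-name')) if n]
    return ' '.join(parts) if parts else None

if __name__ == '__main__':
    print(resolve_name('0000-0002-1735-7458'))
</code></pre>
<ul>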
<li>Help debug issues with Altmetric badges again; it looks like Altmetric is all kinds of fucked up</li>
<li>Last week I pointed out that they were tracking Handles from our test server</li>
<li>Now their API is responding with content that is marked as content-type JSON but is not valid JSON</li>
<li>For example, this item: <a href="https://cgspace.cgiar.org/handle/10568/83320">https://cgspace.cgiar.org/handle/10568/83320</a></li>
<li>The Altmetric JavaScript builds the following API call: <a href="https://api.altmetric.com/v1/handle/10568/83320?callback=_altmetric.embed_callback&amp;domain=cgspace.cgiar.org&amp;key=3c130976ca2b8f2e88f8377633751ba1&amp;cache_until=13-20">https://api.altmetric.com/v1/handle/10568/83320?callback=_altmetric.embed_callback&amp;domain=cgspace.cgiar.org&amp;key=3c130976ca2b8f2e88f8377633751ba1&amp;cache_until=13-20</a></li>
<li>The response body is <em>not</em> JSON</li>
<li>By contrast, the following bare API call without query parameters is valid JSON: <a href="https://api.altmetric.com/v1/handle/10568/83320">https://api.altmetric.com/v1/handle/10568/83320</a> (see the quick check after this list)</li>
<li>I told them that it’s their JavaScript that is fucked up</li>
<li>Remove CPWF project number and Humidtropics subject from submission form (<a href="https://github.com/alanorth/DSpace/pull/3">#3</a>)</li>
<li>I accidentally merged it into my own repository, oops</li>
</ul>
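<ul>
<li>A quick way to reproduce the comparison (a rough sketch using the two URLs above; the key and cache_until parameters are left out here):</li>
</ul>
<pre tabindex="0"><code># Fetch the same Altmetric record with and without the JavaScript's
# callback parameter and check which response body parses as JSON.
import json
import requests

base = 'https://api.altmetric.com/v1/handle/10568/83320'
urls = {
    'bare': base,
    'with callback': base + '?callback=_altmetric.embed_callback&amp;domain=cgspace.cgiar.org',
}

for label, url in urls.items():
    response = requests.get(url)
    try:
        json.loads(response.text)
        result = 'valid JSON'
    except ValueError:
        result = 'NOT valid JSON'
    print('{0}: {1} (Content-Type: {2})'.format(label, result, response.headers.get('Content-Type')))
</code></pre>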
<h2 id="2018-02-22">2018-02-22</h2>
<ul>
<li>CGSpace was apparently down today around 13:00 server time; I didn’t get any emails on my phone, but saw them later on the computer</li>
<li>It looks like Sisay restarted Tomcat because I was offline</li>
<li>There was absolutely nothing interesting going on at 13:00 on the server, WTF?</li>
</ul>
<pre tabindex="0"><code># cat /var/log/nginx/*.log | grep -E "22/Feb/2018:13" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
     55 192.99.39.235
     60 207.46.13.26
     62 40.77.167.38
     65 207.46.13.23
    103 41.57.108.208
    120 104.196.152.243
    133 104.154.216.0
    145 68.180.228.117
    159 54.92.197.82
    231 5.9.6.51
</code></pre><ul>
<li>Otherwise there was pretty normal traffic the rest of the day:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "22/Feb/2018" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
    839 216.244.66.245
   1074 68.180.228.117
   1114 157.55.39.100
   1162 207.46.13.26
   1178 207.46.13.23
   2749 104.196.152.243
   3109 50.116.102.77
   4199 70.32.83.92
   5208 5.9.6.51
   8686 45.5.184.196
</code></pre><ul>
<li>So I don’t see any definite cause for this crash, but I do see a shit ton of abandoned PostgreSQL connections today around 1PM!</li>
</ul>
<pre tabindex="0"><code># grep -c 'org.apache.tomcat.jdbc.pool.ConnectionPool abandon' /var/log/tomcat7/catalina.out
729
# grep 'Feb 22, 2018 1' /var/log/tomcat7/catalina.out | grep -c 'org.apache.tomcat.jdbc.pool.ConnectionPool abandon'
519
</code></pre><ul>
<li>I think the <code>removeAbandonedTimeout</code> might still be too low (I increased it from 60 to 90 seconds last week)</li>
<li>Abandoned connections are not a cause but a symptom, though perhaps something more like a few minutes is better?</li>
<li>Also, while looking at the logs I see some new bot:</li>
</ul>
<pre tabindex="0"><code>Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.4.2661.102 Safari/537.36; 360Spider
</code></pre><ul>
<li>It seems to re-use its user agent but makes tons of useless requests, and I wonder if I should add “<code>.*spider.*</code>” to the Tomcat Crawler Session Manager valve?</li>
</ul>
<h2 id="2018-02-23">2018-02-23</h2>
<ul>
<li>Atmire got back to us with a quote about their DSpace 5.8 upgrade</li>
</ul>
<h2 id="2018-02-25">2018-02-25</h2>
<ul>
<li>A few days ago Abenet sent me the list of ORCID iDs from CCAFS</li>
<li>We currently have 988 unique identifiers:</li>
</ul>
<pre tabindex="0"><code>$ cat dspace/config/controlled-vocabularies/cg-creator-id.xml | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq | wc -l
988
</code></pre><ul>
<li>After adding the ones from CCAFS we now have 1004:</li>
</ul>
<pre tabindex="0"><code>$ cat dspace/config/controlled-vocabularies/cg-creator-id.xml /tmp/ccafs | grep -oE '[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}' | sort | uniq | wc -l
1004
</code></pre><ul>
<li>I will add them to DSpace Test, but Abenet says she’s still waiting to send us ILRI’s list</li>
<li>I will tell her that we should proceed with sharing our work on DSpace Test with the partners this week anyway, and we can update the list later</li>
<li>While regenerating the names for these ORCID identifiers I saw <a href="https://pub.orcid.org/v2.1/0000-0002-2614-426X/person">one that has a weird value for its names</a>:</li>
</ul>
<pre tabindex="0"><code>Looking up the names associated with ORCID iD: 0000-0002-2614-426X
Given Names Deactivated Family Name Deactivated: 0000-0002-2614-426X
</code></pre><ul>
<li>I don’t know if the user accidentally entered this as their name or if that’s how ORCID behaves when the name is private</li>
<li>I will remove that one from our list for now</li>
<li>Remove Dryland Systems subject from submission form because that CRP closed two years ago (<a href="https://github.com/ilri/DSpace/pull/355">#355</a>)</li>
<li>Run all system updates on DSpace Test</li>
<li>Email ICT to ask how to proceed with the OCS proforma issue for the new DSpace Test server on Linode</li>
<li>Thinking about how to preserve ORCID identifiers attached to existing items in CGSpace</li>
<li>We have over 60,000 unique author + authority combinations on CGSpace:</li>
</ul>
<pre tabindex="0"><code>dspace=# select count(distinct (text_value, authority)) from metadatavalue where resource_type_id=2 and metadata_field_id=3;
 count
-------
 62464
(1 row)
</code></pre><ul>
<li>I know from earlier this month that there are only 624 unique ORCID identifiers in the Solr authority core, so it’s way easier to just fetch the unique ORCID iDs from Solr and then go back to PostgreSQL and do the metadata mapping that way</li>
<li>The query in Solr would simply be <code>orcid_id:*</code></li>
<li>Assuming I have an authority record with <code>id:d7ef744b-bbd4-4171-b449-00e37e1b776f</code>, I could query PostgreSQL for all metadata records using that authority:</li>
</ul>
<pre tabindex="0"><code>dspace=# select * from metadatavalue where resource_type_id=2 and authority='d7ef744b-bbd4-4171-b449-00e37e1b776f';
 metadata_value_id | resource_id | metadata_field_id |        text_value         | text_lang | place |              authority               | confidence | resource_type_id
-------------------+-------------+-------------------+---------------------------+-----------+-------+--------------------------------------+------------+------------------
           2726830 |       77710 |                 3 | Rodríguez Chalarca, Jairo |           |     2 | d7ef744b-bbd4-4171-b449-00e37e1b776f |        600 |                2
(1 row)
</code></pre><ul>
<li>Then I suppose I can use the <code>resource_id</code> to identify the item?</li>
<li>Actually, <code>resource_id</code> is the same id we use in CSV, so I could simply build something like this for a metadata import (rough sketch of the mapping below)!</li>
</ul>
<pre tabindex="0"><code>id,cg.creator.id
93848,Alan S. Orth: 0000-0002-1735-7458||Peter G. Ballantyne: 0000-0001-9346-2893
</code></pre>
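<ul>
<li>The mapping idea, roughly (this is only a sketch of the approach, not the actual script I ended up writing; the Solr URL, the <code>value</code> field name, and the database credentials are placeholders):</li>
</ul>
<pre tabindex="0"><code># Sketch: get ORCID iDs from the Solr authority core, look up the items
# (resource_id) that use each authority record in PostgreSQL, and write
# a CSV suitable for a DSpace metadata import.
import csv
import psycopg2
import requests

solr = 'http://localhost:8080/solr/authority/select'  # placeholder URL
params = {'q': 'orcid_id:*', 'fl': 'id,value,orcid_id', 'wt': 'json', 'rows': 10000}
docs = requests.get(solr, params=params).json()['response']['docs']

conn = psycopg2.connect('dbname=dspace user=dspace')  # placeholder credentials
cursor = conn.cursor()

with open('/tmp/orcid-mappings.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['id', 'cg.creator.id'])
    for doc in docs:
        cursor.execute(
            'SELECT resource_id FROM metadatavalue '
            'WHERE resource_type_id=2 AND metadata_field_id=3 AND authority=%s',
            (doc['id'],),
        )
        for (resource_id,) in cursor.fetchall():
            # NB: a real import would join multiple creators per item with
            # '||' as in the example above; this writes one row per author.
            writer.writerow([resource_id, '{0}: {1}'.format(doc['value'], doc['orcid_id'])])
</code></pre>
<ul>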
<li>I just discovered that <a href="https://requests-cache.readthedocs.io">requests-cache</a> can transparently cache HTTP requests</li>
<li>Running <code>resolve-orcids.py</code> with my test input takes 10.5 seconds the first time, and then 3.0 seconds the second time!</li>
</ul>
<pre tabindex="0"><code>$ time ./resolve-orcids.py -i orcid-test-values.txt -o /tmp/orcid-names
Ali Ramadhan: 0000-0001-5019-1368
Alan S. Orth: 0000-0002-1735-7458
Ibrahim Mohammed: 0000-0001-5199-5528
Nor Azwadi: 0000-0001-9634-1958
./resolve-orcids.py -i orcid-test-values.txt -o /tmp/orcid-names 0.32s user 0.07s system 3% cpu 10.530 total
$ time ./resolve-orcids.py -i orcid-test-values.txt -o /tmp/orcid-names
Ali Ramadhan: 0000-0001-5019-1368
Alan S. Orth: 0000-0002-1735-7458
Ibrahim Mohammed: 0000-0001-5199-5528
Nor Azwadi: 0000-0001-9634-1958
./resolve-orcids.py -i orcid-test-values.txt -o /tmp/orcid-names 0.23s user 0.05s system 8% cpu 3.046 total
</code></pre><h2 id="2018-02-26">2018-02-26</h2>
<ul>
<li>Peter is having problems with “Socket closed” on his submissions page again</li>
<li>He says his personal account loads much faster than his CGIAR account, which could be because the CGIAR account has potentially thousands of submissions over the last few years</li>
<li>I don’t know why it would take so long, but this logic kinda makes sense</li>
<li>I think I should increase the <code>removeAbandonedTimeout</code> from 90 to something like 180 seconds and continue observing</li>
<li>I also reduced the timeout for the API pool back to 60 seconds because those interfaces are only used by bots</li>
</ul>
<h2 id="2018-02-27">2018-02-27</h2>
<ul>
<li>Peter is still having problems with “Socket closed” on his submissions page</li>
<li>I have disabled <code>removeAbandoned</code> for now because that’s the only thing I changed in the last few weeks since he started having issues</li>
<li>I think the real line of logic to follow here is why the submissions page is so slow for him (presumably because of loading all his submissions?)</li>
<li>I need to see which SQL queries are run during that time</li>
<li>And only a few hours after I disabled the <code>removeAbandoned</code> thing, CGSpace went down and, lo and behold, there were 264 connections, most of which were idle:</li>
</ul>
<pre tabindex="0"><code>$ psql -c 'select * from pg_stat_activity' | grep -o -E '(dspaceWeb|dspaceApi|dspaceCli)' | sort | uniq -c
      5 dspaceApi
    279 dspaceWeb
$ psql -c 'select * from pg_stat_activity' | grep dspaceWeb | grep -c "idle in transaction"
218
</code></pre><ul>
<li>So I’m re-enabling the <code>removeAbandoned</code> setting</li>
<li>I grabbed a snapshot of the active connections in <code>pg_stat_activity</code> for all queries running longer than 2 minutes:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (SELECT now() - query_start as "runtime", application_name, usename, datname, waiting, state, query
FROM pg_stat_activity
WHERE now() - query_start &gt; '2 minutes'::interval
ORDER BY runtime DESC) to /tmp/2018-02-27-postgresql.txt
COPY 263
</code></pre><ul>
<li>100 of these “idle in transaction” connections are running the following query:</li>
</ul>
<pre tabindex="0"><code>SELECT * FROM resourcepolicy WHERE resource_type_id=$1 AND resource_id=$2 AND action_id=$3
</code></pre><ul>
<li>… but according to the <a href="https://www.postgresql.org/docs/9.5/static/view-pg-locks.html">pg_locks documentation</a> I should have done this to correlate the locks with the activity:</li>
</ul>
<pre tabindex="0"><code>SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;
</code></pre><ul>
<li>Tom Desair from Atmire shared some extra JDBC pool parameters that might be useful, on my thread on the dspace-tech mailing list:
<ul>
<li><code>abandonWhenPercentageFull</code>: only start cleaning up abandoned connections if the pool is used for more than X%</li>
<li><code>jdbcInterceptors='ResetAbandonedTimer'</code>: make sure the “abandoned” timer is reset every time there is activity on a connection</li>
</ul>
</li>
<li>I will try with <code>abandonWhenPercentageFull='50'</code></li>
<li>Also there are some indexes proposed in <a href="https://jira.duraspace.org/browse/DS-3636">DS-3636</a> that he urged me to try</li>
<li>Finally finished the <a href="https://gist.github.com/alanorth/6d7489b50f06a6a1f04ae1c8b899cb6e">orcid-authority-to-item.py</a> script!</li>
<li>It successfully mapped 2600 ORCID identifiers to items in my tests</li>
<li>I will run it on DSpace Test</li>
</ul>
<h2 id="2018-02-28">2018-02-28</h2>
<ul>
<li>CGSpace crashed today; the first HTTP 499 in nginx’s access.log was around 09:12</li>
<li>There’s nothing interesting going on in nginx’s logs around that time:</li>
</ul>
<pre tabindex="0"><code># zcat --force /var/log/nginx/*.log /var/log/nginx/*.log.1 | grep -E "28/Feb/2018:09:" | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10
     65 197.210.168.174
     74 213.55.99.121
     74 66.249.66.90
     86 41.204.190.40
    102 130.225.98.207
    108 192.0.89.192
    112 157.55.39.218
    129 207.46.13.21
    131 207.46.13.115
    135 207.46.13.101
</code></pre><ul>
<li>Looking in dspace.log.2018-02-28 I see this, though:</li>
</ul>
<pre tabindex="0"><code>2018-02-28 09:19:29,692 ERROR org.dspace.app.xmlui.cocoon.DSpaceCocoonServletFilter @ Serious Error Occurred Processing Request!
org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.OutOfMemoryError: Java heap space
</code></pre><ul>
<li>Memory issues seem to be common this month:</li>
</ul>
<pre tabindex="0"><code>$ grep -c 'nested exception is java.lang.OutOfMemoryError: Java heap space' dspace.log.2018-02-*
dspace.log.2018-02-01:0
dspace.log.2018-02-02:0
dspace.log.2018-02-03:0
dspace.log.2018-02-04:0
dspace.log.2018-02-05:0
dspace.log.2018-02-06:0
dspace.log.2018-02-07:0
dspace.log.2018-02-08:0
dspace.log.2018-02-09:0
dspace.log.2018-02-10:0
dspace.log.2018-02-11:0
dspace.log.2018-02-12:0
dspace.log.2018-02-13:0
dspace.log.2018-02-14:0
dspace.log.2018-02-15:10
dspace.log.2018-02-16:0
dspace.log.2018-02-17:0
dspace.log.2018-02-18:0
dspace.log.2018-02-19:0
dspace.log.2018-02-20:0
dspace.log.2018-02-21:0
dspace.log.2018-02-22:0
dspace.log.2018-02-23:0
dspace.log.2018-02-24:0
dspace.log.2018-02-25:0
dspace.log.2018-02-26:0
dspace.log.2018-02-27:6
dspace.log.2018-02-28:1
</code></pre><ul>
<li>Top ten users by session during the first twenty minutes of 9AM:</li>
</ul>
<pre tabindex="0"><code>$ grep -E '2018-02-28 09:(0|1)' dspace.log.2018-02-28 | grep -o -E 'session_id=[A-Z0-9]{32}' | sort -n | uniq -c | sort -n | tail -n 10
18 session_id=F2DFF64D3D707CD66AE3A873CEC80C49
19 session_id=92E61C64A79F0812BE62A3882DA8F4BA
21 session_id=57417F5CB2F9E3871E609CEEBF4E001F
25 session_id=C3CD265AB7AA51A49606C57C069A902A
26 session_id=E395549F081BA3D7A80F174AE6528750
26 session_id=FEE38CF9760E787754E4480069F11CEC
33 session_id=C45C2359AE5CD115FABE997179E35257
38 session_id=1E9834E918A550C5CD480076BC1B73A4
40 session_id=8100883DAD00666A655AE8EC571C95AE
66 session_id=01D9932D6E85E90C2BA9FF5563A76D03
</code></pre><ul>
<li>According to the log, 01D9932D6E85E90C2BA9FF5563A76D03 is an ILRI editor, doing lots of updating and editing of items</li>
<li>8100883DAD00666A655AE8EC571C95AE comes from some Indian IP address</li>
<li>1E9834E918A550C5CD480076BC1B73A4 looks to be a session shared by the bots</li>
<li>So maybe it was due to the editor’s uploading of files, perhaps something that was too big?</li>
<li>I think I’ll increase the JVM heap size on CGSpace from 6144m to 8192m because I’m sick of this random crashing shit, the server has the memory, and I’d rather eliminate this so I can get back to solving PostgreSQL issues and doing other real work</li>
<li>Run the few corrections from earlier this month for sponsor on CGSpace:</li>
</ul>
<pre tabindex="0"><code>cgspace=# update metadatavalue set text_value='United States Agency for International Development' where resource_type_id=2 and metadata_field_id=29 and text_value like '%U.S. Agency for International Development%';
UPDATE 3
</code></pre><ul>
<li>I finally got a CGIAR account, so I logged into CGSpace with it and tried to delete my old unfinished submissions (22 of them)</li>
<li>Eventually it succeeded, but it took about five minutes, and I noticed LOTS of locks happening with this query:</li>
</ul>
<pre tabindex="0"><code>dspace=# \copy (SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid) to /tmp/locks-aorth.txt;
</code></pre><ul>
<li>I took a few snapshots during the process and noticed 500, 800, and even 2000 locks at certain times</li>
<li>Afterwards I looked a few times and saw only 150 or 200 locks</li>
<li>On the test server, with the <a href="https://jira.duraspace.org/browse/DS-3636">PostgreSQL indexes from DS-3636</a> applied, it finished instantly</li>
<li>Run system updates on DSpace Test and reboot the server</li>
</ul>
</article>
</div><!-- /.blog-main -->
<aside class="col-sm-3 ml-auto blog-sidebar">
<section class="sidebar-module">
<h4>Recent Posts</h4>
<ol class="list-unstyled">
<li><a href="/cgspace-notes/2022-03/">March, 2022</a></li>
<li><a href="/cgspace-notes/2022-02/">February, 2022</a></li>
<li><a href="/cgspace-notes/2022-01/">January, 2022</a></li>
<li><a href="/cgspace-notes/2021-12/">December, 2021</a></li>
<li><a href="/cgspace-notes/2021-11/">November, 2021</a></li>
</ol>
</section>
<section class="sidebar-module">
<h4>Links</h4>
<ol class="list-unstyled">
<li><a href="https://cgspace.cgiar.org">CGSpace</a></li>
<li><a href="https://dspacetest.cgiar.org">DSpace Test</a></li>
<li><a href="https://github.com/ilri/DSpace">CGSpace @ GitHub</a></li>
</ol>
</section>
</aside>
</div><!-- /.row -->
</div><!-- /.container -->
<footer class="blog-footer">
<p dir="auto">
Blog template created by <a href="https://twitter.com/mdo">@mdo</a>, ported to Hugo by <a href="https://twitter.com/mralanorth">@mralanorth</a>.
</p>
<p>
<a href="#">Back to top</a>
</p>
</footer>
</body>
</html>