Task #12921
Closed
RStudio - Does not work correctly on some nodes
100%
Description
RStudio does not work correctly on some nodes of the GARR:
http://ip-90-147-167-3.ct1.garrservices.it/r-connector/gcube/service/connect?gcube-token=xxxxxxxxxxxxxx
Error (400): this resource cannot process this request because it is malformed
Stacktrace:
org.gcube.smartgears.handlers.application.request.RequestException: RConnector cannot be called in scope /d4science.research-infrastructures.eu/FARM/GRSF_Admin
    at org.gcube.smartgears.handlers.application.request.RequestError.toException(RequestError.java:110)
    at org.gcube.smartgears.handlers.application.request.RequestError.fire(RequestError.java:94)
    at org.gcube.smartgears.handlers.application.request.RequestValidator.validateScopeCall(RequestValidator.java:98)
Please @roberto.cirillo@isti.cnr.it, can you check whether it's a SmartGears state problem?
Thanks
Updated by Roberto Cirillo over 6 years ago
The problem here seems to be the same one verified yesterday on some dataminer instances @garr. The call is rejected because the scope is not present in the container state:
2018-11-21 12:03:54,664 [catalina-exec-9] WARN RequestValidator: rejecting call to RConnector in invalid context /d4science.research-infrastructures.eu/FARM/GRSF_Admin, allowed context are [/d4science.research-infrastructures.eu/D4OS, /d4science.research-infrastructures.eu/OpenAIRE]
In this case the container state contains only the OpenAIRE and D4OS VOs. It doesn't contain the gCubeApps scope, which is very strange because gCubeApps is a very old VO.
In addition, I've noticed that the scopes in the container file are duplicated. I've checked more than one VM running the rstudio service @garr and found the same problem.
@andrea.dellamico@isti.cnr.it Is it possible that the playbook was run without the variable smartgears_merge_scopes set to false? If so, a new run is needed on all the rstudio instances @garr.
I've also noticed that the rstudio instances deployed @CNR don't have this problem: the scopes aren't duplicated.
Updated by Andrea Dell'Amico over 6 years ago
The playbook runs with the same parameters everywhere, so I don't really know why some scopes are discarded. Isn't there anything in the logs from the time the servers were restarted?
Updated by Roberto Cirillo over 6 years ago
There are two problems on the GARR VMs:
The first one is the container state missing a scope, verified on ip-90-147-167-3;
The second one is the duplicated scopes, verified on all the GARR instances.
For the first problem we don't have any log at restart time; we have only the log reported in my previous comment, which appears only when a call is made on the missing scope.
For the second problem we have the following log at restart time:
2018-11-19 16:16:00,866 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu cannot be used, another token with the same context {} found
2018-11-19 16:16:00,879 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu/FARM cannot be used, another token with the same context {} found
2018-11-19 16:16:00,879 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu/SoBigData cannot be used, another token with the same context {} found
2018-11-19 16:16:00,879 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu/SmartArea cannot be used, another token with the same context {} found
2018-11-19 16:16:00,880 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu/gCubeApps cannot be used, another token with the same context {} found
2018-11-19 16:16:00,880 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu/D4Research cannot be used, another token with the same context {} found
2018-11-19 16:16:00,880 [localhost-startStop-1] WARN ContainerManager: the token /d4science.research-infrastructures.eu/ParthenosVO cannot be used, another token with the same context {} found
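For reference, a quick way to confirm the duplication directly on a node is to look for repeated context entries in the container configuration file. This is only a sketch: the file path and the pattern below are assumptions and should be adapted to the actual SmartGears layout on the VM.
# path is an assumption; adjust to the node's SmartGears container configuration file
grep -o '/d4science.research-infrastructures.eu[^"<]*' /home/gcube/tomcat/lib/container.xml | sort | uniq -cd
Any line reported by uniq -cd is a context that appears more than once in the configuration.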
I think the second problem could be resolved by running the playbook again without merging the scopes. Do you agree?
For the first problem we need further analysis.
Updated by Roberto Cirillo over 6 years ago
- Status changed from New to In Progress
I'm going to run the playbook first on ip-90-147-167-3; if everything is successful, I'll run it on every GARR instance.
Updated by Andrea Dell'Amico over 6 years ago
Fine by me. To clean up the duplicate scope entries, you can run the playbook using the token and disabling the merge of the existing scopes.
Updated by Roberto Cirillo over 6 years ago
Not so easy: I get the following error when running the playbook:
fatal: [ip-90-147-167-3.ct1.garrservices.it]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"ip-90-147-167-3.ct1.garrservices.it\". Make sure this host can be reached over ssh", "unreachable": true}
Any idea?
Updated by Andrea Dell'Amico over 6 years ago
It seems like something transient: I can access the VM and I see you have a shell on it.
Updated by Roberto Cirillo over 6 years ago
It cannot be an ssh problem. I'm able to access via ssh as both root and gcube, but when I run the playbook I get that error. I've tried several times with the same result.
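A quick way to check whether Ansible itself can open the connection, outside the playbook run, could be the ping module against the same inventory. This is a sketch, assuming run.sh wraps ansible-playbook with the inventory shown later in this thread, and limiting the run to the failing node:
ansible all -i inventory/hosts.production -l ip-90-147-167-3.ct1.garrservices.it -m ping
# compare with a verbose manual connection using the same user (gcube, as above)
ssh -vvv gcube@ip-90-147-167-3.ct1.garrservices.it
If the ping module fails while the manual ssh works, the difference is likely in the connection parameters Ansible uses (user, key, or control path) rather than in the host itself.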
Updated by Andrea Dell'Amico over 6 years ago
I just ran the playbook successfully on all the GARR rstudio servers, this way:
./run.sh rstudio.yml -i inventory/hosts.production -l rstudio_garr -t smartgears_conf -e 'gcube_admin_token=<my_token>' -e 'smartgears_merge_scopes=False'
Can you check if it worked?
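One quick way to re-test from the command line, using the same endpoint reported in the description (token redacted here as in the original report), might be:
curl -i 'http://ip-90-147-167-3.ct1.garrservices.it/r-connector/gcube/service/connect?gcube-token=xxxxxxxxxxxxxx'
If the scope were still missing from the container state, this should return the same Error (400) with the RequestException shown in the description.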
Updated by Roberto Cirillo over 6 years ago
- Assignee changed from Roberto Cirillo to Giancarlo Panichi
Now the scopes are OK. @g.panichi@isti.cnr.it, could you please repeat the test on ip-90-147-167-3.ct1.garrservices.it?
Updated by Giancarlo Panichi over 6 years ago
Ok, now it works on ip-90-147-167-3.ct1.garrservices.it for me.
Updated by Roberto Cirillo over 6 years ago
- Status changed from In Progress to Closed
- % Done changed from 0 to 100
OK. The state has been cleaned properly. I'm going to close this ticket.