Support #10094


Check the output of Twitter crawlers

Added by Salvatore Minutoli over 7 years ago. Updated over 7 years ago.

Status:
Closed
Priority:
Normal
Assignee:
_InfraScience Systems Engineer
Category:
Application
Start date:
Oct 27, 2017
Due date:
% Done:
100%
Estimated time:
Infrastructure:
Production

Description

Currently the twittermonitor1.d4science.org VM hosts some PHP scripts that implement the Twitter crawling. Every running crawler produces two files: /home/gcube/twmon/data/twmon_ID.log and /home/gcube/twmon/data/twmon_ID.res. It would be useful, at least during the first period, to be able to check their contents. Is it possible to download them when needed for test purposes? I can change the directory to which the files are written, if needed.
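
For reference, a minimal PHP sketch of how such a crawler could write its two output files from a single configurable base directory, so that redirecting them means changing one constant. The TWMON_DATA_DIR constant, the twmonWrite helper and the crawler id "42" are illustrative assumptions, not taken from the actual scripts.

```php
<?php
// Hypothetical sketch: the real crawler scripts are not shown in this ticket.
// TWMON_DATA_DIR is assumed to be the only place the output path is defined.
define('TWMON_DATA_DIR', '/home/gcube/twmon/data');

function twmonWrite(string $crawlerId, string $extension, string $content): void
{
    // Produces paths like /home/gcube/twmon/data/twmon_ID.log and twmon_ID.res
    $path = sprintf('%s/twmon_%s.%s', TWMON_DATA_DIR, $crawlerId, $extension);
    file_put_contents($path, $content, FILE_APPEND | LOCK_EX);
}

// Example: append a log line and a result record for crawler "42"
twmonWrite('42', 'log', date('c') . " crawler started\n");
twmonWrite('42', 'res', "{\"tweets_collected\": 0}\n");
```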

Actions #1

Updated by Andrea Dell'Amico over 7 years ago

You can put them inside /home/gcube/tomcat/logs and you will be able to see them at http://twittermon1.d4science.org/gcube-logs/, like the other log files.
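
Assuming the output path is controlled by a single constant as in the sketch above, the suggested change would amount to something like the following; the exact URLs under gcube-logs are an assumption based on the listing mentioned here.

```php
<?php
// Hypothetical: point the crawler output at the Tomcat logs directory,
// which is served under http://twittermon1.d4science.org/gcube-logs/
define('TWMON_DATA_DIR', '/home/gcube/tomcat/logs');

// The files would then be reachable at URLs like:
//   http://twittermon1.d4science.org/gcube-logs/twmon_42.log
//   http://twittermon1.d4science.org/gcube-logs/twmon_42.res
```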

Actions #2

Updated by Salvatore Minutoli over 7 years ago

Ok, I will change the output directory in my plugin and check if it works correctly.

Actions #3

Updated by Salvatore Minutoli over 7 years ago

  • Status changed from New to Closed

I changed the output directory. The plugin can write to it without any problem, and I am able to see the results and the logs in the suggested directory.
After an initial period, the result files will be moved back to an internal directory.
