Daily tasks and best practices
This section describes the usual procedure of a biomed support duty shift. Subsequent sections give further details on specific tasks.
Starting a shift
When your shift starts, get the VO status summary (salient tickets, on-going issues) from the previous team on shift.
During the shift
This page describes the daily tasks and best practices. These recommendations aim to organize the handling of technical issues across shifters and to provide a coherent interface for successful communication. In particular, here are the daily tasks at a glance:
- Follow up on open tickets, and verify solved tickets.
- Monitor critical services (VOMS, LFC).
- Check ARGO alarms concerning SEs, CEs.
- Deal with full SEs and resource decommissioning.
- Report detected issues concerning ARGO box by assigning a team ticket to the dedicated ARGO support unit.
Before submitting GGUS team tickets, have a careful look at the advice about ticket submission below.
Ending a shift
At the end of the shift, please report a VO status summary to the next team on shift to ensure a seamless take-over.
The actions below should be performed on a daily basis as much as possible.
Follow-up on open tickets
The follow-up of open issues is as important as the monitoring of resources.
At least once a week, we have to check on open tickets and:
- send a reminder in case there has been no progress,
- answer questions or take appropriate actions in case admins expect some inputs from us.
Check the status of open team tickets (sorted by last update)
Check the status of VOSupport tickets
These are notified to the biomed technical shift list, but it is good practice to check on them regularly. Note that VOSupport tickets are team tickets when we submit them; they are not team tickets when submitted by site admins or users.
Verify solved tickets
Solved is not closed: validate the ticket once in status solved, or re-open it if the problem persists.
Team and VOSupport tickets solved/verified/unsolved during the last month
Identification of issues
VOMS server
The proxy certificate creation should work:
voms-proxy-init -voms biomed
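If needed, the standard VOMS client command voms-proxy-info can additionally be used to confirm that the biomed attributes were actually granted:
voms-proxy-info -all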
The VOMS administration interface should be available. From a UI, run the command:
voms-admin --vo=biomed --host voms-biomed.in2p3.fr --port 8443 list-cas
LFC server
The command “time lfc-ls /grid” should return in less than 30 seconds.
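For reference, a minimal check sketch; the LFC host below is assumed to be the usual biomed catalog, confirm it matches your UI configuration:
export LFC_HOST=lfc-biomed.in2p3.fr
time lfc-ls /grid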
Monitoring SEs
Identify the problems
SRM probes used by the ARGO box (https://github.com/EGI-Foundation/nagios-plugins-srm):
- are based on the gfal2 library for the storage operations (gfal-copy, etc.),
- query the BDII service in order to build the Storage URL to test, given the host name and the VO name,
- need a valid X509 proxy certificate to execute (configured via the X509_USER_PROXY variable).
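Before running the probe or the manual tests below, make sure a valid proxy is in place; a minimal preparation sketch (the proxy file path is arbitrary):
voms-proxy-init -voms biomed -out /tmp/biomed_proxy.pem
export X509_USER_PROXY=/tmp/biomed_proxy.pem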
Reminder: do NOT submit a ticket if the service is in downtime or is not in proper production status: check the supporting resources or faulty resources pages on VAPOR.
Reproduce the problem
Manual SRM testing (copy a file to the SE), from the biomed-ui.fedcloud.fr VM, where gfal2 is already installed:
i) Build the Storage URL following the model “srm://marsedpm.in2p3.fr:8446/dpm/in2p3.fr/home/biomed”. NOTE 1: the model works for DPM SEs, not sure about StoRM or dCache. NOTE 2: it would be interesting to use the probe for building this URL.
ii) Use gfal-ls to check that we can list the folder:
[spop@biomed-ui ~]$ gfal-ls srm://marsedpm.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/user/s/scamarasu
iii) Use gfal-copy to copy a file (in this case, job.jdl) to the above URL:
gfal-copy job.jdl srm://marsedpm.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/user/s/scamarasu/
Copying file:///home/spop/dirac/job.jdl   [DONE]  after 17s
iv) Check that the file was copied and is now listed:
gfal-ls srm://marsedpm.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/user/s/scamarasu
job.jdl
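It is good practice to remove the test file afterwards, so that test artifacts do not pile up on the SE (gfal-rm is part of the same gfal2 tool suite; same URL as above):
gfal-rm srm://marsedpm.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/user/s/scamarasu/job.jdl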
Note that in some cases gfal-ls may work (as well as gfal-mkdir), but not gfal-copy:
gfal-mkdir srm://clrlcgse01.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/scamarasu
gfal-ls srm://clrlcgse01.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/scamarasu
gfal-copy dirac/job.jdl srm://clrlcgse01.in2p3.fr:8446/dpm/in2p3.fr/home/biomed/scamarasu/
gfal-copy error: 70 (Communication error on send) - Could not open destination: globus_xio: Unable to connect to clrlcgse01.in2p3.fr:2811 globus_xio: System error in connect: Connection refused globus_xio: A system call failed: Connection refused
Ignored alarms
The error “No information for [attribute(s): ['GlueServiceEndpoint', 'GlueSAPath', 'GlueVOInfoPath']]” occurs when the SE does not publish information in the BDII. This may be due to a network outage, unscheduled downtime…
In such cases, DO NOT SUBMIT a ticket until the SE is back online.
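To check whether the SE is publishing again, a BDII query of this kind can be used (a minimal sketch: the top-level BDII host shown is an example, adapt it to your environment and replace <SE hostname>):
ldapsearch -x -LLL -H ldap://lcg-bdii.cern.ch:2170 -b "o=grid" "(&(objectClass=GlueService)(GlueServiceUniqueID=*<SE hostname>*))" GlueServiceEndpoint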
Reminder: before submitting a ticket, make sure one is not open yet.
Dealing with full SEs
See the full SE procedure.
SE Decommissioning
When an SE is planned for decommissioning, launch the specific SE decommissioning procedure.
Monitoring CEs
Identify the problems
The ARGO box is the best way to identify faulty resources. You may use the following direct link: Critical issues for service group CREAM-CE.
Probes documentation is available at https://wiki.egi.eu/wiki/ROC_SAM_Tests.
Reproduce the problem
Reproduce the problem using the method below.
Download this test JDL (or this one, since the first one seems to fail), rename it as test_ce_noreq.jdl and submit it to the concerned CE. Check the BDII (lcg-infosites) to get the full name of a queue on that CE and run the command:
glite-ce-job-submit -a -r <CE hostname>:<port>/<queue_name> test_ce_noreq.jdl
Then check the status and the output once the submit command has completed:
glite-ce-job-status <jobId>
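Once the job reaches a final status (e.g. DONE-OK), the output sandbox can be retrieved with the companion command of the same CREAM CLI suite:
glite-ce-job-output <jobId>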
Reminder: before submitting a ticket make sure one is not open yet.
Ignored alarms
Shifters shall focus in priority on failed job submissions: probe emi.cream.CREAMCE-AllowedSubmission.
However, other failing probes such as emi.cream.CREAMCE-JobCancel and emi.cream.CREAMCE-JobPurge may be a sign that the test did not follow the expected workflow, hence tests should also be performed in those cases.
Some investigation is often needed to understand what the problem is and whether a ticket should be submitted. In particular, the following situations usually (but not always) justify ignoring an alarm:
- The service is in downtime or is not in proper production status: check the supporting resources or faulty resources pages on VAPOR.
- The queue is disabled: its status may be “Closed” or “Draining”, and the log shows the message “queue is disabled”.
- Probe time-outs: ARGO probes are configured to time out after some time. However, a time-out should not raise a critical alarm; “warning” or “unknown” would be more accurate.
- “No compatible resources”: this type of alarm is most likely a problem on the WMS, which sent a job to an inappropriate CE. A non-urgent ticket may be submitted to ask for the admin's opinion.
- “Maximum number of jobs already in queue MSG=total number of jobs in queue exceeds the queue limit”: each site decides on its own policy as to whether to accept or reject biomed jobs. We cannot submit a ticket when the queue is full, given that we use resources in an opportunistic manner.
Check CEs publishing bad numbers of running/waiting jobs
Check the CEs that publish wrong (default) values for running or waiting jobs in the VAPOR report: this can be global CE data (tab Faulty Computing) or per-share job counts (tab Faulty jobs).
The default figure is 4444 or 444444. For each of those, if the CE is in normal production (no downtime, production status), submit a non-urgent ticket asking to solve the problem.
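To cross-check the published values by hand, a BDII query along the lines below can be used (a minimal sketch: the top-level BDII host shown is an example, adapt it to your environment; the attributes are standard GLUE1 job-state attributes):
ldapsearch -x -LLL -H ldap://lcg-bdii.cern.ch:2170 -b "o=grid" "(&(objectClass=GlueCE)(GlueCEAccessControlBaseRule=VO:biomed))" GlueCEUniqueID GlueCEStateRunningJobs GlueCEStateWaitingJobs GlueCEStateEstimatedResponseTime GlueCEStateWorstResponseTime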
More example LDAP requests for direct checks in the BDII are provided in the VO Support Tools project. This is a template ticket message that can be reused to report a faulty resource. Don't forget to customize the colored text.
Subject
CE hostname publishes invalid data
Body
Dear site admin,
The CE publishes erroneous default data:
- no running job
- 444444 waiting jobs
- ERT = 2146660842
- WRT = 2146660842
This was reported by VAPOR: http://operations-portal.in2p3.fr/vapor/resources/GL2ResVO?VOfilter=biomed (tab Faulty Computing|Faulty jobs).
Note that VAPOR reports job counts published in GLUE2 data which may differ from GLUE1 data. You may want to check the suggestions here: https://wiki.egi.eu/wiki/Tools/Manuals/TS59
Thanks in advance for your support,
<shifter name>, for the Biomed VO.
CVMFS support
Biomed is progressively migrating to the CVMFS solution (CERN Virtual Machine File System) to manage VO-specific software. In time, it should replace the VO_BIOMED_SW_DIR variable.
To do so, biomed VO administrators submit tickets to sites supporting CVMFS, asking them to enable biomed in their configuration. Shifters are requested to follow up on those tickets: in particular, when site admins agree to enable biomed, they ask us to test it; shifters then have to test the CVMFS service by submitting a job to one CE on that site. Use this test script and this JDL. Example:
export CE=ce04-lcg.cr.cnaf.infn.it:8443/cream-lsf-biomed
glite-wms-job-submit -a -o job_id.txt -r $CE test_ce.jdl
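If the referenced test script is not at hand, a minimal check along these lines can serve the same purpose (the repository path is an assumption; use the actual biomed CVMFS repository name):
#!/bin/sh
# Minimal CVMFS visibility check, to be run on the worker node.
# /cvmfs/biomed.egi.eu is an assumed repository path - adapt as needed.
ls /cvmfs/biomed.egi.eu && echo "CVMFS OK" || echo "CVMFS NOT available"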
If you need to ask a site to enable biomed, you may want to copy one of the tickets submitted recently.
Advice about ticket submission
Before a ticket is submitted
- Check that the ARGO alarm can be reproduced manually (make sure it is not a monitoring issue) and that it still happens some time after it was first detected (make sure it is not a temporary error).
- Check that no GGUS ticket is already open on this issue: look at open team tickets.
- Check that the concerned host is not in status “downtime”, “not in production”, or “not monitored”, using the supporting resources or faulty resources pages on VAPOR.
During ticket submission
- Submit a team ticket. This will ensure that the next teams on duty will follow-up on the tickets submitted during your shift.
- Specify biomed in the Concerned VO field of the GGUS submission form.
- Clearly specify the concerned service name in the subject (e.g. “LFC”, “VOMS”, “SE” and the SE name, “CE” and the CE name, etc.) to facilitate further searching in the ticket database.
- Priority: local incidents (e.g. one SE is down) should be at priority “urgent”, unless more than 50% of the sites are down (then set priority to “very urgent”). Incidents stopping production (e.g. LFC or VOMS down) should be at “top priority”.
This is a template ticket message that can be reused to report a faulty resource. Don't forget to customize the bold text.
Subject
{SE|CE|WMS|…} hostname is not working for the biomed VO
Body
Dear site admin,
<hostname> is not working for Biomed users. The incident was detected from the Biomed ARGO box, where you may want to check the status: <link to the ARGO alarm page>. The problem was reproduced by hand in the log below.
Thanks in advance for your support,
<shifter name>, for the Biomed VO.
<detailed log>
Ticket follow-up: general remarks
- Sites announcing that they give up biomed VO support should be asked to send a notification to biomed-vo-managers [at] googlegroups [dot] com and to kindly keep the site up to allow for file migration. In this case, the SE decommissioning procedure must be initiated.
- Sites claiming to be in downtime should be asked to remove their entries from the BDII. Sites showing in lcg-infosites will be assumed to be in operation and are a potential ticket target.
- Messages from site admins sometimes seem impolite (e.g. the ticket is put in status “solved” without a single comment while the problem still persists). This may be the result of an automatic action from the local system used to answer GGUS tickets, and not necessarily from a person.