2009-06-17 01:28:46,578:['It appears that the value of pthread_mutex_init is 6813744\n', '\n', 'ERROR: Failed to open command file (No such file or directory)\n']
In the log it also tells you how many jobs of this task it failed to create and submit.
However, even that info is wrong, as the server generally submits more jobs than it thinks it does.
Proof of this problem
Take as an example the task:
spadhi_crab_0_090617_095513_lv5k24
It is supposed to have 1609 jobs in the task.
The ComponentLog file says all of them failed to submit.
The actual logs tell you that 399 out of 1609 were created and we don't know how many were submitted in the first round.
Here's how to find the proof:
[0158] spadhi@glidein-2 ~$ ls /var/dropBox/spadhi_crab_0_090617_095513_lv5k24_spec/*lv5k24_job*.jdl |wc -l
399
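Given the expected job count, the missing job ids can be listed by comparing the created ids against the full range with comm. A minimal, self-contained sketch using fabricated ids (a hypothetical 5-job task stands in for the real 1609 jobs; in practice the created list would come from the dropBox listing above):

```shell
#!/bin/sh
# Fabricated list of job ids that actually got a jdl (jobs 2 and 5 missing).
created="1
3
4"
# Expected ids for a hypothetical 5-job task.
seq 1 5 | sort > /tmp/expected.$$
printf '%s\n' $created | sort > /tmp/created.$$
# comm -23 prints lines only in the first file: the ids with no jdl.
comm -23 /tmp/expected.$$ /tmp/created.$$
rm -f /tmp/expected.$$ /tmp/created.$$
```

For the real task the ranges are larger, so the two lists must be sorted the same way before comm sees them.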
Next it retries (at most 3 times), and during the retry it creates additional jobs, as well as duplicates.
We discuss duplicates in the next section.
Multiple submission of same jdl for incompletely submitted tasks
For tasks that failed initially, it tries 3 times, and each time it potentially creates the same job again.
This leads to cases where the same job runs more than once.
Identify the problem
ls /var/dropBox/spadhi_crab_0_090617_095513_lv5k24_spec/*.jdl | sed -e 's/.*_job*//' | sort | uniq -c
This counts for every job how many different jdl's were created in the dropBox for that job.
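The same pipeline can be extended with an awk filter so that only the duplicated jobs are shown. A self-contained sketch, using fabricated job ids in place of the real dropBox listing:

```shell
#!/bin/sh
# Fabricated job ids as they would look after the sed step:
# job 2 got two jdl's and job 3 got three, i.e. duplicates.
printf '%s\n' 1 2 2 3 3 3 |
  sort | uniq -c |          # count jdl's per job id
  awk '$1 > 1'              # keep only jobs created more than once
```

The first column of the output is the number of jdl's, the second the job id.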
Proof of this problem
From the example below, you can see that out of 1609 jobs to submit, 403 were actually created; of those, 51 were created more than once, and 3 were created 3 times.
We can then look at the submission log for the 3 that were created 3 times, and find for example for job 147:
[0231] spadhi@glidein-2 ~$ less /var/gftp_cache/spadhi_crab_0_090617_095513_lv5k24/CMSSW_147.log
000 (399472.000.000) 06/17 01:04:11 Job submitted from host: <169.228.130.11:48908>
...
000 (401888.000.000) 06/17 01:16:14 Job submitted from host: <169.228.130.11:48908>
...
000 (405516.000.000) 06/17 01:38:22 Job submitted from host: <169.228.130.11:48908>
...
So this job was indeed submitted 3 times.
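Condor writes user-log event code 000 for every submission, so the submit events can simply be counted instead of eyeballing the log. A sketch with a fabricated log fragment standing in for the real CMSSW_147.log:

```shell
#!/bin/sh
log=$(mktemp)
# Fabricated Condor user-log fragment in the shape of the excerpt above.
cat > "$log" <<'EOF'
000 (399472.000.000) 06/17 01:04:11 Job submitted from host: <169.228.130.11:48908>
...
000 (401888.000.000) 06/17 01:16:14 Job submitted from host: <169.228.130.11:48908>
...
000 (405516.000.000) 06/17 01:38:22 Job submitted from host: <169.228.130.11:48908>
EOF
# Event code 000 marks each submission; a healthy job has exactly one.
grep -c '^000 ' "$log"
rm -f "$log"
```

Any count above 1 means the job was submitted multiple times.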
Below are more details on this job:
SchedulerJobId not properly specified by crab at submit time (fixed as of 6/18/09)
We see that the dashboard does not show the SchedulerJobId for jobs that were submitted but are still pending.
Julia tells us that this is because the crabserver does not form the SchedulerJobId correctly at submit time.
As a result, the dashboard does not parse it correctly, and thus we see nothing.
This makes the debugging of the multiple jobs per jobId more difficult because they actually overwrite each other in the dashboard.
However, the site does have a valid CE according to the BDII:
ce01-cms.lip.pt:2119/jobmanager-lcgsge-cmsgrid
This leads to whole tasks disappearing from the crab server, without the user being informed about it.
Problem with the way proxies are updated
It appears that "something" in the crabserver keeps the user proxy updated from myproxyserver.
Whatever mechanism does this does it in a way that causes problems with glexec and condor.
Basically, the proxy file on disk is touched and possibly rewritten (?), and this is not done atomically, so the file is in an inconsistent state for a noticeable time.
As a result, both condor and glexec sometimes find a corrupted/incomplete/inconsistent proxy on disk while they try to access it.
This leads to both of them failing.
A better way to do this would be to write the new proxy into a separate file, and then mv the file to its proper place.
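A minimal sketch of the atomic-rename approach (the path below is purely illustrative; the real proxy location depends on the crabserver installation). Because mv within one filesystem is a rename, condor and glexec see either the complete old proxy or the complete new one, never a half-written file:

```shell
#!/bin/sh
# Illustrative path; the real proxy file lives wherever crabserver keeps it.
proxy=/tmp/demo_x509up_u500
# Create the temp file next to the final path, i.e. on the SAME filesystem,
# so that the mv below is a rename and therefore atomic.
tmp=$(mktemp "${proxy}.XXXXXX")

printf 'renewed proxy bytes\n' > "$tmp"   # stand-in for writing the new proxy
chmod 600 "$tmp"                          # proxies must be owner-only
mv "$tmp" "$proxy"                        # atomic replace: no partial state visible
```

If the temp file were created in a different filesystem (e.g. /tmp vs the proxy directory), mv would fall back to copy-and-delete and the atomicity would be lost.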
As of right now, we do not know what piece of software inside crabserver does this.
Our knowledge that this is going on comes from:
- condor core dump analysis
- the glexec error "no file is found"
Where to find what logs on crabserver
Here we document the directories where you find stuff on the glidein-2 crab server.
All directories are on glidein-2, of course.
The jdl and such created for submission to condor
/var/dropBox/
This directory has one subdirectory per task. The subdirectory names are of the following structure: