UC Davis User Documentation
Creating a Proxy
Most operations, such as job submission and data access, require a grid proxy. It is recommended to create a proxy with the VOMS extension of your VO, like this:
voms-proxy-init -voms cms
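To verify that the proxy was created successfully and to check its remaining lifetime, you can run:
voms-proxy-info -all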
Job Submission
This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.
Submit File
In order to submit jobs through condor, you must first write a submit file. The name of the file is arbitrary, but we will call it job.condor in this document.
Example submit file:
universe = grid
grid_resource = condor uclhc-1.ucr.edu 10.0.12.6
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue
This example assumes job.condor and the test.sh executable are in the current directory, and that a logs subdirectory already exists there. Condor will create test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process), respectively.
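For reference, a minimal test.sh might look like the following. This is only a sketch: the original document does not say what test.sh does, so here the argument (300 in the example above) is assumed to be a sleep duration in seconds.
#!/bin/bash
# Print which worker node the job landed on, sleep for the number of
# seconds given as the first argument (assumed meaning), then exit.
echo "Running on $(hostname)"
sleep "$1"
echo "Done"
Remember to make the script executable (chmod +x test.sh) before submitting.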
Jobs can be submitted to condor using the following command:
condor_submit job.condor
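To submit several identical copies of a job, the queue statement in the submit file accepts a count. For example, five copies, numbered $(Cluster).0 through $(Cluster).4 as in the screen dump further below, could be submitted with:
queue 5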
Targeting Resources
The UCLHC setup allows you to choose which domains your jobs run on. By default, jobs will run on the slots local to the brick, as well as in the local batch system of the site. You can further choose to run at all other UCs and at the SDSC Comet cluster. Each of these is controlled by adding a special boolean attribute to the submit file. The following table lists the flags, their defaults, and descriptions:
| *flag* | *default* | *description* |
| +local | true | run on the brick |
| +site_local | true | run in your own local site batch system |
| +sdsc | false | run at Comet |
| +uc | false | run at all other UCs |
Example submit file to restrict jobs to only run at SDSC and not locally:
universe = grid
grid_resource = condor uclhc-1.ucr.edu 10.0.12.6
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue
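Conversely, to opt in to every available resource at once, set all four flags to true in the submit file (a sketch; combine the flags however suits your workflow):
+local = true
+site_local = true
+sdsc = true
+uc = true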
Querying Jobs
The following will show a list of your jobs in the queue:
condor_q <username>
Screen dump:
[1627] jdost@uclhc-1 ~$ condor_q jdost
-- Submitter: uclhc-1.ps.uci.edu : <192.5.19.13:9615?sock=76988_ce0d_4> : uclhc-1.ps.uci.edu
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
29.0 jdost 8/21 16:25 0+00:01:46 R 0 0.0 test.sh 300
29.1 jdost 8/21 16:25 0+00:01:46 R 0 0.0 test.sh 300
29.2 jdost 8/21 16:25 0+00:01:46 R 0 0.0 test.sh 300
29.3 jdost 8/21 16:25 0+00:01:46 R 0 0.0 test.sh 300
29.4 jdost 8/21 16:25 0+00:01:46 R 0 0.0 test.sh 300
5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended
Detailed classads can be dumped for a particular job with the -l flag:
condor_q -l $(Cluster).$(Process)
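For example, to dump the full classad of the first job from the screen dump above:
condor_q -l 29.0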
Canceling Jobs
You can cancel all of your own jobs at any time with the following:
condor_rm <username>
Or, alternatively, remove a specific job by its $(Cluster).$(Process) numbers, e.g.:
condor_rm 26.0
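Passing just the cluster number removes every process in that cluster at once:
condor_rm 26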
Data Access
Reading data should be done via xrootd. The following areas are available by default:
- Any data already made available by the AAA federation
- Other areas local to your site may be exported as well, depending on the setup.
NOTE: Data from these areas are read-only when accessed remotely through xrootd.
XRootD Proxy Caching
To improve performance and conserve network I/O, reads should happen through xrootd caching proxies. Because jobs may run at many different sites, the nearest xrootd cache is not generally known in advance. A convenience environment variable, CMS_XROOTD_CACHE, is therefore provided; use it in your application to access files from xrootd through the nearest cache.
Example AAA Access Using xrdcp
xrdcp root://${CMS_XROOTD_CACHE}//store/mc/RunIIFall15DR76/BulkGravTohhTohVVhbb_narrow_M-900_13TeV-madgraph/AODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/10000/40B50F72-5BB4-E511-A31F-001517FB1B60.root .
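You can also browse the namespace through the same cache with xrdfs. This is a sketch assuming CMS_XROOTD_CACHE holds a host[:port] endpoint, with the path taken from the xrdcp example above:
xrdfs ${CMS_XROOTD_CACHE} ls /store/mc/RunIIFall15DR76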
References
- HTCondor User Manual: https://htcondor.readthedocs.io/
-- JeffreyDost - 2016/12/22