UC Santa Cruz User Documentation

Creating a Proxy

Most operations, such as job submission and data access, require a grid proxy. It is recommended to create one with the VOMS extension of your VO, for example:

voms-proxy-init -voms atlas
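
You can verify that the proxy was created and that the VOMS attributes are present with:

voms-proxy-info -all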

Job Submission

This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the user manual linked in the References section below.

Submit File

To submit jobs through condor, you must first write a submit file. The name of the file is arbitrary, but we will call it job.condor in this document.

Example submit file:

universe = grid
grid_resource = condor uclhc-1.ucsc.edu 192.168.100.14
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue

This example assumes job.condor and the test.sh executable are in the current directory, and that a logs subdirectory already exists there. Condor will create test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process) respectively.
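
The test.sh executable itself is not shown above; a minimal sketch, assuming the single argument is a sleep time in seconds, could look like:

#!/bin/bash
# Minimal test job: report where it landed, then sleep for the requested time.
echo "Running on $(hostname)"
sleep "$1"
echo "Done"

Remember to make the script executable (chmod +x test.sh) before submitting.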

Jobs can be submitted to condor using the following command:

condor_submit job.condor
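
On success, condor_submit reports the cluster number assigned to the job, with output along these lines:

Submitting job(s).
1 job(s) submitted to cluster 29.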

Targeting Resources

The UCLHC setup allows you to choose a particular domain to run on. By default, jobs will run on the slots local to the brick, as well as in the local batch system of the site. You can additionally choose to run at all the other UCs and at the SDSC Comet cluster. Each target is controlled by adding a special boolean attribute to the submit file. The following table lists the flags, their defaults, and descriptions:

flag          default   description
+local        true      run on the brick
+site_local   true      run in your own local site batch system
+sdsc         false     run at SDSC Comet
+uc           false     run at all other UCs

Example submit file to restrict jobs to only run at SDSC and not locally:

universe = grid
grid_resource = condor uclhc-1.ucsc.edu 192.168.100.14
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue
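
Since +local and +site_local already default to true, opening a job up to the other UCs as well only requires one additional line in the submit file:

+uc = true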

Querying Jobs

The following will show a list of your jobs in the queue:

 condor_q <username>

Example output:

[1627] jdost@uclhc-1 ~$ condor_q jdost


-- Submitter: uclhc-1.ps.uci.edu : <192.5.19.13:9615?sock=76988_ce0d_4> : uclhc-1.ps.uci.edu
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
  29.0   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.1   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.2   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.3   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.4   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       

5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended

Detailed ClassAds can be dumped for a particular job with the -l flag and the job's $(Cluster).$(Process) numbers, e.g.:

condor_q -l 29.0
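
To print just selected attributes instead of the full ClassAd, the -autoformat (-af) option can be used; for example, to check the status of a job and where it is running:

condor_q -af JobStatus RemoteHost 29.0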

Canceling Jobs

You can cancel all of your own jobs at any time with the following:

condor_rm <username>

Alternatively, choose a specific job with the $(Cluster).$(Process) numbers, e.g.:

condor_rm 26.0
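
An entire cluster of jobs can also be removed at once by giving just the cluster number, e.g.:

condor_rm 26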

Data Access

Data should be read via XRootD. The following areas are available by default:

  1. Any data already made available by the FAX federation
  2. Other areas local to your site, which may be exported as well, depending on the setup.

NOTE: Data in these areas is read-only when accessed remotely through XRootD.

XRootD Proxy Caching

To improve performance and conserve network I/O, reads should happen through XRootD caching proxies. Because jobs may run at any of several sites, the nearest XRootD cache is not generally known in advance. A convenience environment variable, ATLAS_XROOTD_CACHE, is therefore provided; use it in your application to access files from XRootD through the nearest cache.

Example FAX Access Using xrdcp

xrdcp root://${ATLAS_XROOTD_CACHE}//atlas/rucio/user/ivukotic:user.ivukotic.xrootd.wt2-1M .
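
The same pattern works from inside a job script; the following minimal sketch (using the same public test file as above) guards against the variable being unset:

#!/bin/bash
# Copy an input file through the nearest XRootD cache; fail early if the
# cache variable is not defined in this environment.
if [ -z "${ATLAS_XROOTD_CACHE}" ]; then
    echo "ATLAS_XROOTD_CACHE is not set" >&2
    exit 1
fi
xrdcp "root://${ATLAS_XROOTD_CACHE}//atlas/rucio/user/ivukotic:user.ivukotic.xrootd.wt2-1M" .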

References
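
HTCondor User Manual: https://htcondor.readthedocs.io/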

-- JeffreyDost - 2016/12/23
