UC Irvine User Documentation

Creating a Proxy

For most operations, such as job submission and data access, it is recommended to create a proxy with the VOMS extension of your VO, for example:

voms-proxy-init -voms atlas
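
You can confirm that the proxy was created, and check its remaining lifetime and VOMS attributes, with the companion voms-proxy-info command:

voms-proxy-info -all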

Job Submission

This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.

Submit File

In order to submit jobs through condor, you must first write a submit file. The name of the file is arbitrary but we will call it job.condor in this document.

Example submit file:

universe = vanilla
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue

This example assumes job.condor and the test.sh executable are in the current directory, and that a logs subdirectory already exists there. Condor will create test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process) respectively.
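
The contents of test.sh are not shown in this document; judging from arguments = 300 it is just a placeholder payload. A minimal sketch along those lines (the script body below is an assumption, only the name test.sh and the numeric argument come from the example) could be:

#!/bin/bash
# Hypothetical test payload: report where the job landed, then sleep
# for the number of seconds passed as the first argument
echo "Running on $(hostname)"
sleep "$1"
echo "Done"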

Jobs can be submitted to condor using the following command:

condor_submit job.condor

Targeting Resources

The UCLHC setup allows you to choose a particular domain to run on. By default, jobs will run on the slots local to the brick, as well as in the local batch system of the site. You can further choose to run out to all of the other UCs and also to the SDSC Comet cluster. These are each controlled by adding special booleans to the submit file. The following table lists the flags, their defaults, and descriptions:

flag          default   description
+local        true      run on the brick
+sdsc         false     run at Comet
+site_local   true      run in your own local site batch system
+uc           false     run at all other UCs

Example submit file to restrict jobs to only run at SDSC and not locally:

universe = vanilla
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue

Querying Jobs

The following will show a list of your jobs in the queue:

 condor_q <username>

Screen dump:

[1627] jdost@uclhc-1 ~$ condor_q jdost


-- Submitter: uclhc-1.ps.uci.edu : <192.5.19.13:9615?sock=76988_ce0d_4> : uclhc-1.ps.uci.edu
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
  29.0   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.1   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.2   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.3   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.4   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       

5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended

Detailed classads can be dumped for a particular job with the -l flag:

condor_q -l $(Cluster).$(Process)
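
For example, to dump the full classad of the first job in the screen dump above:

condor_q -l 29.0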

Canceling Jobs

You can cancel all of your own jobs at any time with the following:

condor_rm <username>

Alternatively, choose a specific job by its $(Cluster).$(Process) number, e.g.:

condor_rm 26.0
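
Giving just the cluster number removes every job in that cluster at once:

condor_rm 26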

Data Access

Reading data should be done via xrootd. The following areas are available by default:

  1. Any data already made available by the FAX federation
  2. Other areas local to your site may be exported as well, depending on the setup.

NOTE: Data from these areas are read-only when accessed remotely through xrootd.

Exported Local Disk Area

User data directories are provided on the brick and are exported through xrootd to be visible to the grid:

/data/uclhc/uci/user/<username>

NOTE: The physical path starts with /data when accessed locally through the filesystem (ls, rm, etc). However, the logical path when accessed remotely from xrootd begins with /uclhc. See the read examples below.

XRootD Proxy Caching

To improve performance and conserve network I/O, reads should happen through xrootd caching proxies. Because jobs may run at any of the targeted sites, the nearest xrootd cache is not generally known in advance. A convenience environment variable, ATLAS_XROOTD_CACHE, is therefore provided; it can be used in your application to access files from xrootd via the nearest cache.

Example Brick Access Using xrdcp

xrdcp root://${ATLAS_XROOTD_CACHE}//uclhc/uci/user/jdost/test.txt .

Example FAX Access Using xrdcp

xrdcp root://${ATLAS_XROOTD_CACHE}//atlas/rucio/user/ivukotic:user.ivukotic.xrootd.wt2-1M .

Transferring Output

Since xrootd is configured as a read-only system, you should use the condor file transfer mechanism to transfer job output back home to the brick.

The following example assumes the test.sh executable generates an output file called test.out. This is an example of a condor submit file that makes condor transfer the output back to the user data area. The relevant attributes are transfer_output_files and transfer_output_remaps:

universe = vanilla
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/uci/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue  

Note that transfer_output_remaps is used here because without it, condor will by default return the output file to the working directory that condor_submit was run from.
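
For completeness, the test.sh sketch from the Submit File section could be extended to actually produce test.out. Only the script name, the numeric argument, and the test.out file name come from the examples above; the rest of the body is an assumption:

#!/bin/bash
# Hypothetical test payload: sleep for the requested number of seconds,
# then write a small output file for condor to transfer back
sleep "$1"
echo "finished on $(hostname) at $(date)" > test.out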

References

  * HTCondor User Manual: https://htcondor.readthedocs.io/

-- JeffreyDost - 2015/08/21
