Difference: UserDoc (1 vs. 19)

Revision 19 - 2016/12/23 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Revision 18 - 2016/12/22 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 310 to 310
 

References

Changed:
<
<
>
>
 
Added:
>
>
 -- JeffreyDost - 2015/08/18

Revision 17 - 2016/05/26 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 222 to 222
 

Data Access

Changed:
<
<
>
>
  Reading data should be done via xrootd. The following areas are available by default:
Deleted:
<
<
  1. Local disk area on the brick
 
  1. Any data already made available by the %FED% federation
  2. Other areas local to your site may be exported as well, depending on the setup.
Changed:
<
<
ALERT! NOTE Data from thes areas are read-only when accessed remotely through xrootd
>
>
ALERT! NOTE Data from these areas are read-only when accessed remotely through xrootd
 

Exported Local Disk Area

Added:
>
>
 User data directories are provided on the brick and are exported through xrootd to be visible to the grid:

/data/uclhc/%UC_LOWER%/user/<username>

ALERT! NOTE The physical path starts with /data when accessed locally through the filesystem (ls, rm, etc). However the logical path when accessed remotely from xrootd begins with /uclhc. See the read examples below.
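For instance, the same file is addressed both ways as follows (a minimal illustration; the xrdcp form is covered by the read examples below):

# local filesystem path on the brick (physical, starts with /data)
ls /data/uclhc/%UC_LOWER%/user/jdost/test.txt

# remote xrootd path (logical, starts with /uclhc)
xrdcp root://${%VO_UPPER%_XROOTD_CACHE}//uclhc/%UC_LOWER%/user/jdost/test.txt .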

Added:
>
>
 

XRootD Proxy Caching

Added:
>
>
 To improve performance and conserve network I/O, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus a convenience environment variable, %VO_UPPER%_XROOTD_CACHE, is provided, which can be used in your application to access files from xrootd.
Added:
>
>

 

Example Brick Access Using xrdcp

xrdcp root://${%VO_UPPER%_XROOTD_CACHE}//uclhc/%UC_LOWER%/user/jdost/test.txt .
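Inside a job wrapper such as test.sh it can help to fail early if the cache variable is somehow unset; the following is a defensive sketch, not part of the original instructions:

# abort with a clear message if the cache variable is missing on this node
: "${%VO_UPPER%_XROOTD_CACHE:?xrootd cache variable not set}"
xrdcp "root://${%VO_UPPER%_XROOTD_CACHE}//uclhc/%UC_LOWER%/user/jdost/test.txt" .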
Changed:
<
<
>
>
 

Example FAX Access Using xrdcp

Line: 256 to 267
 
Added:
>
>

Example AAA Access Using xrdcp

xrdcp root://${CMS_XROOTD_CACHE}//store/mc/RunIIFall15DR76/BulkGravTohhTohVVhbb_narrow_M-900_13TeV-madgraph/AODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/10000/40B50F72-5BB4-E511-A31F-001517FB1B60.root .

 

Transferring Output

Line: 290 to 308
 
Deleted:
<
<

Condor-C Transferring Output

Since xrootd is configured as a read-only system, you should use the condor file transfer mechanism to transfer job output back home to the brick.

The following example assumes the test.sh executable generates an output file called test.out. This is an example of a condor submit file to make condor transfer the output back to the user data area. The relevant attributes are in bold:

universe = grid
grid_resource = condor %HOSTNAME% %LOCAL_HOSTNAME%
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue  

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.

 

References

Revision 16 - 2016/05/26 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 114 to 114
 
Added:
>
>

Condor-C Job Submission

This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.

Submit File

In order to submit jobs through condor, you must first write a submit file. The name of the file is arbitrary but we will call it job.condor in this document.

Example submit file:

universe = grid
grid_resource = condor %HOSTNAME% %LOCAL_HOSTNAME%
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue

This example assumes job.condor and the test.sh executable are in the current directory, and a logs subdirectory is also already present in the current directory. Condor will create the test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process) respectively.
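If the logs subdirectory does not exist yet, create it before submitting; condor will not create it for you:

mkdir -p logs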

Jobs can be submitted to condor using the following command:

condor_submit job.condor

Targeting Resources

The UCLHC setup allows you to choose a particular domain to run on. By default jobs will run on the slots locally on the brick, as well as in the local batch system of the site. You can further choose to run outside at all other UCs and also at the SDSC Comet cluster. These are each controlled by adding special booleans to the submit file. The following table lists the flags, their defaults, and descriptions:

flag          default   description
+local        true      run on the brick
+site_local   true      run in your own local site batch system
+sdsc         false     run at Comet
+uc           false     run at all other UCs

Example submit file to restrict jobs to only run at SDSC and not locally:

universe = grid
grid_resource = condor %HOSTNAME% %LOCAL_HOSTNAME%
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue
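Similarly, to let jobs also flock out to the other UCs, enable the corresponding boolean from the table above in the submit file (an illustrative combination, not an example from the original page):

+uc = true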

Querying Jobs

The following will show a list of your jobs on the queue:

 condor_q <username>

Screen dump:

[1627] jdost@uclhc-1 ~$ condor_q jdost


-- Submitter: uclhc-1.ps.uci.edu : <192.5.19.13:9615?sock=76988_ce0d_4> : uclhc-1.ps.uci.edu
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
  29.0   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.1   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.2   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.3   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.4   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       

5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended

Detailed classads can be dumped for a particular job with the -l flag:

condor_q -l $(Cluster).$(Process)
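For example, using the first job from the queue listing above (cluster 29, process 0):

condor_q -l 29.0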

Canceling Jobs

You can cancel all of your own jobs at any time with the following:

condor_rm <username>

Or alternatively choose a specific job with the $(Cluster).$(Process) numbers, e.g.:

condor_rm 26.0

 

Data Access

Line: 184 to 290
 
Added:
>
>

Condor-C Transferring Output

Since xrootd is configured as a read-only system, you should use the condor file transfer mechanism to transfer job output back home to the brick.

The following example assumes the test.sh executable generates an output file called test.out. This is an example of a condor submit file to make condor transfer the output back to the user data area. The relevant attributes are in bold:

universe = grid
grid_resource = condor %HOSTNAME% %LOCAL_HOSTNAME%
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue  

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.
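As an illustrative variation (not from the original page), the remap target can include the $(Cluster) and $(Process) macros so that multiple jobs submitted from the same file do not overwrite each other's output:

transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.$(Cluster).$(Process).out"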

 

References

Revision 15 - 2015/10/12 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 8 to 8
  A proxy is needed for most operations (job submission and data access). It is recommended to create one with the VOMS extension of your VO like this:

voms-proxy-init -voms %VO_LOWER%
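The resulting proxy and its VOMS attributes can be verified with the standard client tool (not shown in the original text):

voms-proxy-info -all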
Added:
>
>
 

Job Submission

Revision 14 - 2015/09/29 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 65 to 65
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
Deleted:
<
<
use_x509userproxy = True
 error = logs/test.err.$(Cluster).$(Process)
Added:
>
>
use_x509userproxy = True
notification = Never
queue

Revision 13 - 2015/09/29 - Main.EdgarHernandez

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 65 to 65
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
Added:
>
>
use_x509userproxy = True
error = logs/test.err.$(Cluster).$(Process)
notification = Never
queue
Line: 173 to 174
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
Added:
>
>
use_x509userproxy = True
notification = Never
queue

Revision 12 - 2015/09/29 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 55 to 55
Changed:
<
<
universe = vanilla *+local = false* *+site_local = false* *+sdsc = true* executable = test.sh arguments = 300 should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/test.log output = logs/test.out.$(Cluster).$(Process) error = logs/test.err.$(Cluster).$(Process) notification = Never queue 
>
>
universe = vanilla
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
notification = Never queue 

Querying Jobs

Line: 152 to 163
Changed:
<
<
universe = vanilla  
executable = test.sh arguments = 300  
should_transfer_files = YES
when_to_transfer_output = ON_EXIT 
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log  
output = logs/test.out.$(Cluster).$(Process) 
 error = logs/test.err.$(Cluster).$(Process)  
notification = Never 
queue  
>
>
universe = vanilla
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
notification = Never
queue  

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.

Revision 11 - 2015/09/16 - Main.EdgarHernandez

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 7 to 7
 

Creating a proxy.

For most of the operations: job submission and data access. It is recommended to create a proxy with the VOMS extension of your vo like this
Changed:
<
<
voms-proxy-init -voms %VO_UPPER%
>
>
voms-proxy-init -voms %VO_LOWER%
 

Job Submission

Line: 27 to 27
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
Added:
>
>
use_x509userproxy = True
notification = Never
queue

Revision 10 - 2015/08/25 - Main.EdgarHernandez

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 151 to 151
Changed:
<
<
universe = vanilla 
executable = test.sh 
arguments = 300 
*should_transfer_files = YES*
*when_to_transfer_output = ON_EXIT*
*transfer_output_files = test.out*
*transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"* 
log = logs/test.log 
output = logs/test.out.$(Cluster).$(Process) 
error = logs/test.err.$(Cluster).$(Process) 
notification = Never queue 
>
>
universe = vanilla  
executable = test.sh arguments = 300  
should_transfer_files = YES
when_to_transfer_output = ON_EXIT 
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log  
output = logs/test.out.$(Cluster).$(Process) 
 error = logs/test.err.$(Cluster).$(Process)  
notification = Never 
queue  

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.

Revision 9 - 2015/08/24 - Main.EdgarHernandez

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 151 to 151
Changed:
<
<
universe = vanilla executable = test.sh arguments = 300 *should_transfer_files = YES* *when_to_transfer_output = ON_EXIT* *transfer_output_files = test.out* *transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"* log = logs/test.log output = logs/test.out.$(Cluster).$(Process) error = logs/test.err.$(Cluster).$(Process) notification = Never queue 
>
>
universe = vanilla 
executable = test.sh 
arguments = 300 
*should_transfer_files = YES*
*when_to_transfer_output = ON_EXIT*
*transfer_output_files = test.out*
*transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"* 
log = logs/test.log 
output = logs/test.out.$(Cluster).$(Process) 
error = logs/test.err.$(Cluster).$(Process) 
notification = Never queue 

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.

Revision 8 - 2015/08/24 - Main.EdgarHernandez

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Added:
>
>

Creating a proxy.

For most of the operations: job submission and data access. It is recommended to create a proxy with the VOMS extension of your vo like this

voms-proxy-init -voms %VO_UPPER%
 

Job Submission

Changed:
<
<
This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.
>
>
This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.
 

Submit File

Line: 50 to 54
Changed:
<
<
universe = vanilla
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
notification = Never
queue
>
>
universe = vanilla *+local = false* *+site_local = false* *+sdsc = true* executable = test.sh arguments = 300 should_transfer_files = YES when_to_transfer_output = ON_EXIT log = logs/test.log output = logs/test.out.$(Cluster).$(Process) error = logs/test.err.$(Cluster).$(Process) notification = Never queue 

Querying Jobs

Line: 115 to 105
 

Reading from data should be done via xrootd. The following areas are available by default:

Changed:
<
<
  1. Local disk area on the brick
  2. Any data already made available by the %FED% federation
  3. Other areas local to your site may be exported and as well, depending on the setup.
>
>
  1. Local disk area on the brick
  2. Any data already made available by the %FED% federation
  3. Other areas local to your site may be exported and as well, depending on the setup.
  ALERT! NOTE Data from thes areas are read-only when accessed remotely through xrootd
Line: 161 to 151
Changed:
<
<
universe = vanilla
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
notification = Never
queue
>
>
universe = vanilla executable = test.sh arguments = 300 *should_transfer_files = YES* *when_to_transfer_output = ON_EXIT* *transfer_output_files = test.out* *transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"* log = logs/test.log output = logs/test.out.$(Cluster).$(Process) error = logs/test.err.$(Cluster).$(Process) notification = Never queue 

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.

Revision 7 - 2015/08/22 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 6 to 6
 

Job Submission

Deleted:
<
<

Submit File

 
Added:
>
>
This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.

Submit File

 
Changed:
<
<
In order to submit jobs through condor, one must first write a submit file. The name of the file is arbitrary but we will call it job.condor in this document.
>
>
In order to submit jobs through condor, you must first write a submit file. The name of the file is arbitrary but we will call it job.condor in this document.
  Example submit file:
Line: 28 to 29
  This example assumes job.condor and the test.sh executable are in the current directory, and a logs subdirectory is also already present in the current directory. Condor will create the test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process) respectively.
Changed:
<
<
The user can then submit their job to condor using the following command:
>
>
Jobs can be submitted to condor using the following command:
 
condor_submit job.condor

Targeting Resources

Changed:
<
<
The UCLHC setup allows users to chose a particular domain to run on. By default jobs will run on the slots locally in the brick, as well as in the local batch system of the site. A user can further choose to run outside to all UCs and also to the SDSC Comet cluster. These are each controlled by adding special booleans to the submit file. The following table lists the flags, their defaults, and descriptions:
>
>
The UCLHC setup allows you to chose a particular domain to run on. By default jobs will run on the slots locally in the brick, as well as in the local batch system of the site. You can further choose to run outside to all UCs and also to the SDSC Comet cluster. These are each controlled by adding special booleans to the submit file. The following table lists the flags, their defaults, and descriptions:
 
flag default description
+local true run on the brick
Line: 41 to 42
 
+sdsc false run at Comet
+uc false run at all other UCs
Changed:
<
<
Example submit file for user who wants to only run at SDSC and not locally:
>
>
Example submit file to restrict jobs to only run at SDSC and not locally:
Line: 68 to 69
 

Querying Jobs

Changed:
<
<
The follwing will show a list of jobs on the queue:
>
>
The follwing will show a list of your jobs on the queue:
 
 condor_q <username>
Line: 101 to 102
 

Canceling Jobs

Changed:
<
<
A user can cancel all their jobs at any time with the following:
>
>
You can cancel all of your own jobs at any time with the following:
 
condor_rm <username>
Changed:
<
<
Or alternatively choosing a specific job with the $(Cluster).$(Process) numbers, e.g.:
>
>
Or alternatively choose a specific job with the $(Cluster).$(Process) numbers, e.g.:
 
condor_rm 26.0
Line: 113 to 114
 
Changed:
<
<
Reading from data should be done via xrootd. The following areas available by default:
>
>
Reading from data should be done via xrootd. The following areas are available by default:
 
  1. Local disk area on the brick
  2. Any data already made available by the %FED% federation
  3. Other areas local to your site may be exported and as well, depending on the setup.
Line: 122 to 123
 

Exported Local Disk Area

Changed:
<
<
We provide a directory on the brick that is exported through xrootd and visible to the grid:
>
>
User data directories are provided on the brick and are exported through xrootd to be visible to the grid:
 
/data/uclhc/%UC_LOWER%/user/<username>
Line: 130 to 131
 

XRootD Proxy Caching

Changed:
<
<
To improve performance, and conserve on network I/O, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus we provide a convenience environment variable, %VO_UPPER%_XROOTD_CACHE which can be used in the user application to access files from xrootd.
>
>
To improve performance, and conserve on network I/O, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus a convenience environment variable is provided, %VO_UPPER%_XROOTD_CACHE which can be used in your application to access files from xrootd.
 

Example Brick Access Using xrdcp

Line: 147 to 148
 

Transferring Output

Changed:
<
<
Since we are using xrootd as a read-only system, users should use the condor file transfer mechanism to transfer job output back home to the brick.
>
>

Since xrootd is configured as a read-only system, you should use the condor file transfer mechanism to transfer job output back home to the brick.

The following example assumes the test.sh executable generates an output file called test.out. This is an example of a condor submit file to make condor transfer the output back to the user data area. The relevant attributes are in bold:

universe = vanilla
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/%UC_LOWER%/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
notification = Never
queue

Note that transfer_output_remaps is used here because without it, by default condor will return the output file to the working directory condor_submit was run from.

References

  -- JeffreyDost - 2015/08/18

Revision 6 - 2015/08/22 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 6 to 6
 

Job Submission

Added:
>
>

Submit File

 
Added:
>
>
In order to submit jobs through condor, one must first write a submit file. The name of the file is arbitrary but we will call it job.condor in this document.
 Example submit file:
universe=vanilla
executable=test.sh
Added:
>
>
arguments = 300
should_transfer_files = YES
 when_to_transfer_output = ON_EXIT
Added:
>
>
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
notification = Never
queue

This example assumes job.condor and the test.sh executable are in the current directory, and a logs subdirectory is also already present in the current directory. Condor will create the test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process) respectively.

The user can then submit their job to condor using the following command:

condor_submit job.condor

Targeting Resources

The UCLHC setup allows users to chose a particular domain to run on. By default jobs will run on the slots locally in the brick, as well as in the local batch system of the site. A user can further choose to run outside to all UCs and also to the SDSC Comet cluster. These are each controlled by adding special booleans to the submit file. The following table lists the flags, their defaults, and descriptions:

flag default description
+local true run on the brick
+site_local true run in your own local site batch system
+sdsc false run at Comet
+uc false run at all other UCs

Example submit file for user who wants to only run at SDSC and not locally:

universe = vanilla
+local = false
+site_local = false
+sdsc = true
executable = test.sh

 arguments=300
Changed:
<
<
+local=false +site_local=false #+sdsc=false
>
>
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log=logs/test.log
output=logs/test.out.$(Cluster).$(Process)
error=logs/test.err.$(Cluster).$(Process)
notification=Never
Changed:
<
<
queue 1
>
>
queue

Querying Jobs

The follwing will show a list of jobs on the queue:

 condor_q <username>

Screen dump:

[1627] jdost@uclhc-1 ~$ condor_q jdost


-- Submitter: uclhc-1.ps.uci.edu : <192.5.19.13:9615?sock=76988_ce0d_4> : uclhc-1.ps.uci.edu
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
  29.0   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.1   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.2   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.3   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.4   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       

5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended
 
Added:
>
>

Detailed classads can be dumped for a particular job with the -l flag:

condor_q -l $(Cluster).$(Process)

Canceling Jobs

A user can cancel all their jobs at any time with the following:

condor_rm <username>

Or alternatively choosing a specific job with the $(Cluster).$(Process) numbers, e.g.:

condor_rm 26.0
 
Line: 47 to 130
 

XRootD Proxy Caching

Changed:
<
<
To improve performance, and conserve on network IO, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus we provide a convenience environment variable, %VO_UPPER%_XROOTD_CACHE which can be used in the user application to access files from xrootd.
>
>
To improve performance, and conserve on network I/O, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus we provide a convenience environment variable, %VO_UPPER%_XROOTD_CACHE which can be used in the user application to access files from xrootd.
 

Example Brick Access Using xrdcp

Line: 61 to 144
 
xrdcp root://${ATLAS_XROOTD_CACHE}//atlas/rucio/user/ivukotic:user.ivukotic.xrootd.wt2-1M .
Added:
>
>

Transferring Output

Since we are using xrootd as a read-only system, users should use the condor file transfer mechanism to transfer job output back home to the brick.

 -- JeffreyDost - 2015/08/18

Revision 5 - 2015/08/21 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 30 to 30
 
Changed:
<
<
Reading from data should generally be done via xrootd.
>
>
Reading from data should be done via xrootd. The following areas available by default:
  1. Local disk area on the brick
  2. Any data already made available by the %FED% federation
  3. Other areas local to your site may be exported and as well, depending on the setup.

ALERT! NOTE Data from thes areas are read-only when accessed remotely through xrootd

Exported Local Disk Area

We provide a directory on the brick that is exported through xrootd and visible to the grid:

/data/uclhc/%UC_LOWER%/user/<username>

ALERT! NOTE The physical path starts with /data when accessed locally through the filesystem (ls, rm, etc). However the logical path when accessed remotely from xrootd begins with /uclhc. See the read examples below.

XRootD Proxy Caching

  To improve performance, and conserve on network IO, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus we provide a convenience environment variable, %VO_UPPER%_XROOTD_CACHE which can be used in the user application to access files from xrootd.
Changed:
<
<

Example Access Using xrdcp

>
>

Example Brick Access Using xrdcp

 
xrdcp root://${%VO_UPPER%_XROOTD_CACHE}//uclhc/%UC_LOWER%/user/jdost/test.txt .

Changed:
<
<
-- JeffreyDost - 2015/08/18
>
>

Example FAX Access Using xrdcp

 
Changed:
<
<
<-- TWIKI VARIABLES 
  • Set VO_UPPER = CMS
  • Set UC_LOWER = ucsd
-->
>
>
xrdcp root://${ATLAS_XROOTD_CACHE}//atlas/rucio/user/ivukotic:user.ivukotic.xrootd.wt2-1M .

-- JeffreyDost - 2015/08/18

Revision 4 - 2015/08/21 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 6 to 6
 

Job Submission

Added:
>
>
 Example submit file:
universe=vanilla
Line: 22 to 24
 queue 1
Added:
>
>
 

Data Access

Added:
>
>

Reading from data should generally be done via xrootd.

To improve performance, and conserve on network IO, reads should happen through xrootd caching proxies. Due to the flexibility of job submission, the nearest xrootd cache is not generally known in advance. Thus we provide a convenience environment variable, %VO_UPPER%_XROOTD_CACHE which can be used in the user application to access files from xrootd.

Example Access Using xrdcp

xrdcp root://${%VO_UPPER%_XROOTD_CACHE}//uclhc/%UC_LOWER%/user/jdost/test.txt .

 -- JeffreyDost - 2015/08/18 \ No newline at end of file
Added:
>
>
<-- TWIKI VARIABLES 
  • Set VO_UPPER = CMS
  • Set UC_LOWER = ucsd
-->

Revision 3 - 2015/08/21 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 22 to 22
 queue 1
Added:
>
>

Data Access

 -- JeffreyDost - 2015/08/18 \ No newline at end of file

Revision 2 - 2015/08/18 - Main.JeffreyDost

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

User Documentation

Line: 6 to 6
 

Job Submission

Added:
>
>
Example submit file:
universe=vanilla
executable=test.sh
when_to_transfer_output = ON_EXIT
arguments=300
+local=false
+site_local=false
#+sdsc=false
log=logs/test.log
output=logs/test.out.$(Cluster).$(Process)
error=logs/test.err.$(Cluster).$(Process)
notification=Never
queue 1
 -- JeffreyDost - 2015/08/18

Revision 1 - 2015/08/18 - Main.JeffreyDost

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="WebHome"

User Documentation

Job Submission

-- JeffreyDost - 2015/08/18

 