UC Davis User Documentation

Creating a Proxy

For most operations, such as job submission and data access, it is recommended to create a proxy with the VOMS extension of your VO, like this:

voms-proxy-init -voms cms

Job Submission
  This section shows the basics needed to start submitting jobs through HTCondor. For more detailed instructions about using HTCondor, please see the link to the user manual below in the References section.

Submit File

In order to submit jobs through condor, you must first write a submit file. The name of the file is arbitrary, but we will call it job.condor in this document.

Example submit file:

universe = grid
grid_resource = condor uclhc-1.tier3.ucdavis.edu 10.8.0.6
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue

This example assumes job.condor and the test.sh executable are in the current directory, and that a logs subdirectory is also already present there. Condor will create test.log and send the job's stdout and stderr to test.out.$(Cluster).$(Process) and test.err.$(Cluster).$(Process), respectively.
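The contents of test.sh are not shown in this document; a minimal sketch of such a script, assuming its single argument is simply a number of seconds to sleep, might look like:

#!/bin/bash
# Hypothetical test.sh: report where the job landed, sleep for the requested
# number of seconds (passed via 'arguments' in the submit file), then exit.
echo "Running on $(hostname)"
SECS=${1:-60}
sleep "$SECS"
echo "Finished after ${SECS} seconds"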

Jobs can be submitted to condor using the following command:

condor_submit job.condor
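To submit several identical jobs in one cluster (like the five processes of cluster 29 shown in the Querying Jobs example below), change the final queue line of the submit file to request a count, for example:

queue 5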

Targeting Resources

The UCLHC setup allows you to choose a particular domain to run on. By default, jobs will run on the slots local to the brick as well as in the local batch system of the site. You can additionally choose to run at all other UCs and at the SDSC Comet cluster. Each of these is controlled by adding special boolean flags to the submit file. The following table lists the flags, their defaults, and descriptions:

flag          default   description
+local        true      run on the brick
+site_local   true      run in your own local site batch system
+sdsc         false     run at Comet
+uc           false     run at all other UCs

Example submit file to restrict jobs to only run at SDSC and not locally:

universe = grid
grid_resource = condor uclhc-1.tier3.ucdavis.edu 10.8.0.6
+local = false
+site_local = false
+sdsc = true
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue
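Conversely, to make jobs eligible for every domain listed in the table above (the brick, the local site batch system, the other UCs, and Comet), set all four flags to true in the submit file; the first two are already true by default:

+local = true
+site_local = true
+uc = true
+sdsc = true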

Querying Jobs

The following will show a list of your jobs in the queue:

 condor_q <username>

Screen dump:

[1627] jdost@uclhc-1 ~$ condor_q jdost


-- Submitter: uclhc-1.ps.uci.edu : <192.5.19.13:9615?sock=76988_ce0d_4> : uclhc-1.ps.uci.edu
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
  29.0   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.1   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.2   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.3   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       
  29.4   jdost           8/21 16:25   0+00:01:46 R  0   0.0  test.sh 300       

5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended

Detailed ClassAds can be dumped for a particular job with the -l flag:

condor_q -l <cluster>.<process>
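For example, using the job IDs from the screen dump above, you can dump one job's full ClassAd, or print just a few attributes with the -autoformat option (assumed to be available in your condor_q version):

condor_q -l 29.0
condor_q -autoformat ClusterId ProcId JobStatus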

Canceling Jobs

You can cancel all of your own jobs at any time with the following:

condor_rm <username>

Alternatively, choose a specific job by its cluster and process numbers, e.g.:

condor_rm 26.0
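You can also remove every job in a cluster at once by giving only the cluster number, e.g. for the five jobs of cluster 29 shown earlier:

condor_rm 29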

Data Access


Reading data should be done via xrootd. The following areas are available by default:

  1. Any data already made available by the AAA federation
  2. Other areas local to your site may be exported as well, depending on the setup.

NOTE: Data from these areas are read-only when accessed remotely through xrootd.

 

XRootD Proxy Caching

To improve performance and conserve network I/O, reads should go through xrootd caching proxies. Because jobs can land at different sites, the nearest xrootd cache is not generally known in advance. A convenience environment variable, CMS_XROOTD_CACHE, is therefore provided and can be used in your application to access files from xrootd.

Example AAA Access Using xrdcp

xrdcp root://${CMS_XROOTD_CACHE}//store/mc/RunIIFall15DR76/BulkGravTohhTohVVhbb_narrow_M-900_13TeV-madgraph/AODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/10000/40B50F72-5BB4-E511-A31F-001517FB1B60.root .
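Inside a job script the same pattern applies. The sketch below is illustrative only: it assumes CMS_XROOTD_CACHE is set in the job environment and falls back to the global AAA redirector cms-xrd-global.cern.ch (an assumption, not part of the UCLHC setup described here) when it is not:

#!/bin/bash
# Copy one file from the AAA federation, preferring the nearest xrootd cache.
# The fallback redirector below is an assumption; adjust for your experiment/site.
LFN=/store/mc/RunIIFall15DR76/BulkGravTohhTohVVhbb_narrow_M-900_13TeV-madgraph/AODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/10000/40B50F72-5BB4-E511-A31F-001517FB1B60.root
SERVER=${CMS_XROOTD_CACHE:-cms-xrd-global.cern.ch}
xrdcp "root://${SERVER}/${LFN}" .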
 

Transferring Output

Since xrootd access is read-only, you should use the condor file transfer mechanism to transfer job output back home to the brick.

The following example assumes the test.sh executable generates an output file called test.out. This submit file makes condor transfer the output back to the user data area; the relevant attributes are transfer_output_files and transfer_output_remaps:

universe = grid
grid_resource = condor uclhc-1.tier3.ucdavis.edu 10.8.0.6
executable = test.sh
arguments = 300
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = test.out
transfer_output_remaps = "test.out = /data/uclhc/ucd/user/jdost/test.out"
log = logs/test.log
output = logs/test.out.$(Cluster).$(Process)
error = logs/test.err.$(Cluster).$(Process)
use_x509userproxy = True
notification = Never
queue

Note that transfer_output_remaps is used here because without it, condor will by default return the output file to the working directory condor_submit was run from.

References

-- JeffreyDost - 2015/08/22

 