UCSD Physics Computing Facility (PCF) User Guide

About this Document

The UCSD Physics Computing Facility (PCF) provides access to multiple high-throughput computing resources that are made available to students, faculty, and staff in the Department of Physics as well as those in the broader scientific community at UCSD. This document describes how to get an account on PCF and begin submitting jobs to its computing resources.

Please note that this documentation is currently under construction and may not be complete in some parts.

This document follows the general Open Science Grid (OSG) documentation conventions:

  1. A User Command Line is illustrated by a green box that displays a prompt:
     [user@client ~]$ 
  2. Lines in a file are illustrated by a yellow box that displays the desired lines in a file:
     priorities=1 

System Overview

PCF is a dual-socket login node with two Intel Xeon E5-2670 v3 processors, 132 GB of RAM, and 1 TB of hard disk space. The system is currently running CentOS 6.8 and uses HTCondor for batch job submission and resource management. PCF currently enables users to access the following computing resources:

  • pcf-osg.t2.ucsd.edu, the PCF login node itself
  • the CMS Tier 2 Cluster at UCSD
  • the Comet Supercomputer at SDSC (XSEDE allocation on Comet required)
  • the Open Science Grid (OSG)

While users may submit and run jobs locally on PCF itself, all computationally intensive jobs should generally be run only on the larger computing resources, reserving PCF's local resources for development and testing purposes only.

System Status

  • Access to Comet is currently unavailable from PCF, but it will again be available in early 2017.

User Accounts

You may obtain a user account on PCF by contacting the Physics Help Desk (helpdesk@physics.ucsd.edu). They will need your UCSD Active Directory (AD) username to create the account. Accounts are available to any UCSD student, faculty member, or staff member running scientific computing workloads.

Once your account is created, you will be able to access PCF via SSH using your AD credentials (username/password).

 [user@client ~]$ ssh youradusername@pcf-osg.t2.ucsd.edu 
 Password: ENTERYOURADPASSWORDHERE

Managing Jobs

Job Submission

PCF uses HTCondor to manage batch job submission to the high-throughput computing resources its users may access. Jobs can be submitted to PCF using the condor_submit command as follows:

 [youradusername@pcf-osg ~]$ condor_submit job.condor 

where job.condor is the name of a UNIX-formatted plain ASCII file known as a submit description file. This file contains the commands, expressions, and variables that describe your batch job to HTCondor, such as which executable to run; the files to use for standard input, standard output, and standard error; and the resources required to run the job successfully.

Submit Description Files

A sample HTCondor submit description file (bash_pi.condor) is shown below.

 # A sample HTCondor submit description file
 universe = vanilla
 executable = bash_pi.sh
 arguments = -b 8 -r 5 -s 10000
 should_transfer_files = YES 
 when_to_transfer_output = ON_EXIT
 output = bash_pi.out.$(ClusterId).$(ProcId)
 error = bash_pi.err.$(ClusterId).$(ProcId)
 log = bash_pi.log.$(ClusterId).$(ProcId)
 request_cpus = 1 
 request_disk = 8000000
 request_memory = 1024
 +ProjectName = "PCFOSGUCSD"
 +local = FALSE
 +site_local = TRUE
 +sdsc = FALSE
 +uc = TRUE
 queue 10 

The first line here

 # A sample HTCondor submit description file 
is simply a comment line in the submit description file. Any comments in a submit description file should be placed on their own line.

Next, the universe command defines a specific type of execution environment for your job.

 universe = vanilla 
All batch jobs submitted to PCF should use the default vanilla universe.

The executable command specifies the name of the executable you want to run.

 executable = bash_pi.sh 
Only one executable command should be specified in any submit description file. If no path or a relative path is used, then the executable is presumed to be relative to the current working directory of the user when the condor_submit command was issued. In this example, the executable is a bash shell script named bash_pi.sh, which uses a simple Monte Carlo method to estimate the value of Pi.

To successfully run this example script, a user is required to provide three command-line arguments: (1) the size of integers to use in bytes, (2) the number of decimal places to round the estimate of Pi, and (3) the number of Monte Carlo samples. These command-line arguments are passed to the script in the submit description file via the arguments command.

 arguments = -b 8 -r 5 -s 10000 
Here, the arguments command indicates the script should use 8-byte integers, round the estimate of Pi to 5 decimal places, and take 10000 Monte Carlo samples.
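
For reference, since HTCondor separates its scheduler directives from the program's own command line, the executable and arguments commands above are together equivalent to running the script interactively as:

 [youradusername@pcf-osg ~]$ ./bash_pi.sh -b 8 -r 5 -s 10000

assuming bash_pi.sh is marked executable in your current working directory.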

The should_transfer_files command determines if HTCondor transfers files to and from the remote machine where your job runs.

 should_transfer_files = YES 
YES will cause HTCondor to always transfer input and output files for your jobs. However, the total amount of input and output data for each job using the HTCondor file transfer mechanism should be kept under 5 GB, so that the data can be successfully pulled from PCF by your jobs, processed on the remote machines where they run, and then pushed back to your home directory on PCF. If your requirements exceed this 5 GB per-job limit, please consult the PCF system administrators, who can assist you with setting up an alternative file transfer mechanism.
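
If your job requires additional input files beyond the executable itself, they can be listed (comma-separated) with HTCondor's transfer_input_files command, which will place them in the job's remote scratch directory before the executable starts. For example, with hypothetical input files input.dat and params.txt:

 transfer_input_files = input.dat, params.txt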

The when_to_transfer_output command determines when HTCondor transfers your job's output files back to PCF. If when_to_transfer_output is set to ON_EXIT, HTCondor will transfer the file listed in the output command back to PCF, as well as any other files created by the job in its remote scratch directory, but only when the job exits on its own.

 when_to_transfer_output = ON_EXIT 
If when_to_transfer_output is set to ON_EXIT_OR_EVICT, then the output files are transferred back to PCF any time the job leaves a remote machine, either because it exited on its own, or was evicted by HTCondor for any reason prior to job completion. Any output files transferred back to PCF upon eviction are then automatically sent back out again as input files if the job restarts. This option is intended for fault tolerant jobs which periodically save their own state and are designed to restart where they left off.

The output and error commands provide the paths and filenames used by HTCondor to capture any output and error messages your executable would normally write to stdout and stderr. Similarly, the log command is used to provide the path and filename for the HTCondor job event log, which is a chronological list of events that occur as a job runs.

 output = bash_pi.out.$(ClusterId).$(ProcId)
 error = bash_pi.err.$(ClusterId).$(ProcId)
 log = bash_pi.log.$(ClusterId).$(ProcId) 
Note that each of these commands in the sample submit description file uses the $(ClusterId) and $(ProcId) variables to define the filenames. This appends the ClusterId and ProcId number of each HTCondor job to its respective output, error, and job event log files, which is especially useful for tagging the files of an individual job when a submit description file is used to queue many jobs at once. For example, if the 10 jobs queued by this sample file were assigned ClusterId 16663, their standard output would be captured in files named bash_pi.out.16663.0 through bash_pi.out.16663.9.

Next in the sample submit description file are the standard resource request commands: request_cpus, request_disk, and request_memory.

 request_cpus = 1 
 request_disk = 8000000
 request_memory = 1024 
These commands tell HTCondor what resources are required to successfully run your job: CPUs in number of cores, disk in KiB (by default), and memory in MiB (by default). It is important to provide this information in your submit description files as accurately as possible, since HTCondor will use these requirements to match your job to a machine that can provide such resources. If this information is inaccurate, your job may fail when it is matched with, and attempts to run on, a machine without sufficient resources. All jobs submitted to PCF should contain these request commands. In general, you may assume that any job submitted to PCF can safely use up to 8 CPU-cores, 20 GB of disk space, and 2 GB of memory per CPU-core requested. Note: You can avoid using the default units of KiB and MiB for the request_disk and request_memory commands by appending the characters K (or KB), M (or MB), G (or GB), or T (or TB) to their numerical values to indicate the units to be used.
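
For example, using the unit suffixes noted above, a job that needs the maximum allowed 20 GB of disk could state its requests as follows (an illustrative variation on the sample file):

 request_disk = 20GB
 request_memory = 1G

where request_memory = 1G is equivalent to the 1024 (MiB) used in the sample file.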

HTCondor allows users (and system administrators) to append custom attributes to any job at the time of submission. On PCF, a set of custom attributes are used to mark jobs for special routing and accounting purposes. For example,

 +ProjectName = "PCFOSGUCSD" 
is a job attribute used by the Open Science Grid (OSG) for tracking resource usage by group. All jobs submitted to PCF, including yours, should contain this +ProjectName = "PCFOSGUCSD" attribute, unless directed otherwise.

The next set of custom job attributes in the sample submit description file

 +local = FALSE
 +site_local = TRUE
 +sdsc = FALSE
 +uc = TRUE 
are a set of boolean job routing flags that allow you to explicitly target where your jobs may run. Each one of these boolean flags is associated with one of the different computing resources accessible from PCF. When you set the value of one of these resource flags to TRUE, you permit your jobs to run on the system associated with that flag. In contrast, when you set the value of the resource flag to FALSE, you prevent your jobs from running on that system. The relationship between each job routing flag and computing resource is provided in the following table.

 Job Routing Flag   Default Value   Computing Resource      Accessibility
 +local             TRUE            pcf-osg.t2.ucsd.edu     Open to all PCF users
 +site_local        TRUE            CMS Tier 2 Cluster      Open to all PCF users
 +sdsc              FALSE           Comet Supercomputer     Open only to PCF users with an XSEDE allocation on Comet
 +uc                FALSE           Open Science Grid       Open to all PCF users

We see here that the sample submit description file has targeted the job to run either at the CMS Tier 2 Cluster or out on the Open Science Grid.
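
For example, to instead target the same job to run only out on the Open Science Grid, you could set the routing flags as follows (a hypothetical variation on the sample file):

 +local = FALSE
 +site_local = FALSE
 +sdsc = FALSE
 +uc = TRUE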

Finally, the sample submit description file ends with the queue command, which as shown here simply places an integer number of copies (10) of the job in the HTCondor queue upon submission. If no integer value is given with the queue command, the default value is 1. Every submit description file must contain at least one queue command.

It is important to note that wrapping your work in a shell script, as done here, is a common HTCondor pattern. While other batch systems like SLURM and PBS use standard shell scripts annotated with directives both to communicate the requirements of a batch job to their schedulers and to specify how the job's executable should be run, HTCondor does not work this way. In general, an HTCondor submit description file separates the directives (or commands) to the scheduler from how the executable should be run (i.e., how it would look if run interactively from the command line). As such, HTCondor users often need to wrap their actual (payload) executable within a shell script, which is then named as the executable in the submit description file, as bash_pi.sh is in this sample.

Job Status

Once you submit a job to PCF, you can periodically check on its status by using the condor_q command. There will likely always be other user jobs in PCF's queue besides your own. Therefore, in general, you will want to issue the condor_q command by providing your username as an argument.

 [youradusername@pcf-osg ~]$ condor_q youradusername

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID        OWNER                  SUBMITTED    RUN_TIME   ST PRI SIZE CMD               
 16663.0   youradusername         1/12 17:09   0+00:00:08 R  0   0.0  bash_pi.sh -b 8 -r
 16663.1   youradusername         1/12 17:09   0+00:00:08 R  0   0.0  bash_pi.sh -b 8 -r
 16663.2   youradusername         1/12 17:09   0+00:00:08 R  0   0.0  bash_pi.sh -b 8 -r
 16663.3   youradusername         1/12 17:09   0+00:00:08 R  0   0.0  bash_pi.sh -b 8 -r
 16663.4   youradusername         1/12 17:09   0+00:00:08 R  0   0.0  bash_pi.sh -b 8 -r
 16663.5   youradusername         1/12 17:09   0+00:00:08 R  0   0.0  bash_pi.sh -b 8 -r
 16663.6   youradusername         1/12 17:09   0+00:00:07 R  0   0.0  bash_pi.sh -b 8 -r
 16663.7   youradusername         1/12 17:09   0+00:00:07 R  0   0.0  bash_pi.sh -b 8 -r
 16663.8   youradusername         1/12 17:09   0+00:00:07 R  0   0.0  bash_pi.sh -b 8 -r
 16663.9   youradusername         1/12 17:09   0+00:00:07 R  0   0.0  bash_pi.sh -b 8 -r
 16664.0   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.1   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.2   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.3   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.4   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.5   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.6   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.7   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.8   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r
 16664.9   youradusername         1/12 17:09   0+00:00:00 I  0   0.0  bash_pi.sh -b 8 -r

 20 jobs; 0 completed, 0 removed, 10 idle, 10 running, 0 held, 0 suspended 

This will limit the job status information returned by condor_q to your jobs only. You may also limit the job status information to a particular subset of jobs you're interested in by providing the ClusterId of the subset as an argument to condor_q.

 [youradusername@pcf-osg ~]$ condor_q 16663

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID        OWNER                  SUBMITTED    RUN_TIME   ST PRI SIZE CMD               
 16663.0   youradusername         1/12 17:09   0+00:03:25 R  0   0.0  bash_pi.sh -b 8 -r
 16663.1   youradusername         1/12 17:09   0+00:03:25 R  0   0.0  bash_pi.sh -b 8 -r
 16663.2   youradusername         1/12 17:09   0+00:03:25 R  0   0.0  bash_pi.sh -b 8 -r
 16663.3   youradusername         1/12 17:09   0+00:03:25 R  0   0.0  bash_pi.sh -b 8 -r
 16663.4   youradusername         1/12 17:09   0+00:03:25 R  0   0.0  bash_pi.sh -b 8 -r
 16663.5   youradusername         1/12 17:09   0+00:03:25 R  0   0.0  bash_pi.sh -b 8 -r
 16663.6   youradusername         1/12 17:09   0+00:03:24 R  0   0.0  bash_pi.sh -b 8 -r
 16663.7   youradusername         1/12 17:09   0+00:03:24 R  0   0.0  bash_pi.sh -b 8 -r
 16663.8   youradusername         1/12 17:09   0+00:03:24 R  0   0.0  bash_pi.sh -b 8 -r
 16663.9   youradusername         1/12 17:09   0+00:03:24 R  0   0.0  bash_pi.sh -b 8 -r

 10 jobs; 0 completed, 0 removed, 0 idle, 10 running, 0 held, 0 suspended 

The status of each submitted job in the queue is provided in the column labeled ST in the standard output of the condor_q command. In general, you will find only three different job status codes in this column:

  • R: The job is currently running.
  • I: The job is idle. It is not running right now, because it is waiting for a machine to become available.
  • H: The job is in the held state. A held job will not be scheduled to run until it is released.

If your job is running (R), you probably don't have anything to worry about. However, if the job has been idle (I) for an unusually long period of time or is found in the held (H) state, you may want to investigate why your job is not running before contacting the PCF system administrators for additional help.

If you find your job in the held state (H)

 [youradusername@pcf-osg ~]$ condor_q 16663.3

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID        OWNER                  SUBMITTED    RUN_TIME   ST PRI SIZE CMD               
 16663.3   youradusername         1/12 17:09   0+00:56:56 H  0   26.9 bash_pi.sh -b 8 -r

 1 jobs; 0 completed, 0 removed, 0 idle, 0 running, 1 held, 0 suspended 

you can check the hold reason by appending the -held option to the condor_q command.

 [youradusername@pcf-osg ~]$ condor_q 16663.3 -held

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID       OWNER                  HELD_SINCE HOLD_REASON                                
 16663.3  youradusername         1/12 18:06 via condor_hold (by user youradusername)          

 1 jobs; 0 completed, 0 removed, 0 idle, 0 running, 1 held, 0 suspended 

In this case, the hold reason indicates that you placed the job on hold yourself using the condor_hold command. However, if you find a more unusual HOLD_REASON and are unable to resolve the issue yourself, please contact the PCF system administrators to help you investigate the problem.
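
In the simple case shown here, where you held the job yourself, you can release the job back into the queue with the condor_release command:

 [youradusername@pcf-osg ~]$ condor_release 16663.3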

If instead you find that your job has been sitting idle (I) for an unusually long period of time, you can run condor_q with the -analyze (or -better-analyze) option to attempt to diagnose the problem.

 [youradusername@pcf-osg ~]$ condor_q -analyze 16250.0

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 User priority for youradusername@pcf-osg.t2.ucsd.edu is not available, attempting to analyze without it.
 ---
 16250.000:  Run analysis summary.  Of 20 machines,
      19 are rejected by your job's requirements 
       1 reject your job because of their own requirements 
       0 match and are already running your jobs 
       0 match but are serving other users 
       0 are available to run your job
      No successful match recorded.
      Last failed match: Thu Jan 12 18:45:36 2017
      Reason for last match failure: no match found 

 The Requirements expression for your job is:

    ( TARGET.Arch == "X86_64" ) && ( TARGET.OpSys == "LINUX" ) &&
    ( TARGET.Disk >= RequestDisk ) && ( TARGET.Memory >= RequestMemory ) &&
    ( TARGET.Cpus >= RequestCpus ) && ( TARGET.HasFileTransfer )

 Suggestions:

    Condition                         Machines Matched    Suggestion
    ---------                         ----------------    ----------
 1   ( TARGET.Memory >= 16384 )        1                    
 2   ( TARGET.Cpus >= 8 )              1                    
 3   ( TARGET.Arch == "X86_64" )       20                   
 4   ( TARGET.OpSys == "LINUX" )       20                   
 5   ( TARGET.Disk >= 1 )              20                   
 6   ( TARGET.HasFileTransfer )        20                   

 The following attributes should be added or modified:

 Attribute               Suggestion
 ---------               ----------
 local                   change to undefined 

Again, if you are unable to resolve the issue yourself, please contact the PCF system administrators to help you investigate the problem.
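
In some cases, such as the oversized memory and CPU requests flagged in the analysis above, you may be able to fix an idle job in place, rather than resubmitting it, by editing its attributes with the condor_qedit command. For example (the new values here are illustrative):

 [youradusername@pcf-osg ~]$ condor_qedit 16250.0 RequestMemory 1024
 [youradusername@pcf-osg ~]$ condor_qedit 16250.0 RequestCpus 1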

Job Removal

Occasionally, you may need to remove a job that has already been submitted to the PCF queue; for example, perhaps the job was misconfigured in some way or has been placed on hold. To remove a job from the queue, use the condor_rm command, providing both the ClusterId and ProcId of the job you would like to remove.

 [youradusername@pcf-osg ~]$ condor_q youradusername

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID        OWNER                  SUBMITTED    RUN_TIME   ST PRI SIZE CMD               
 16665.0   youradusername         1/13 08:55   0+01:24:38 R  0   122.1 bash_pi.sh -b 8 -r
 16665.1   youradusername         1/13 08:55   0+01:24:38 R  0   26.9 bash_pi.sh -b 8 -r
 16665.2   youradusername         1/13 08:55   0+01:24:38 R  0   26.9 bash_pi.sh -b 8 -r
 16665.3   youradusername         1/13 08:55   0+01:24:38 R  0   26.9 bash_pi.sh -b 8 -r
 16665.4   youradusername         1/13 08:55   0+01:24:38 R  0   26.9 bash_pi.sh -b 8 -r
 16665.5   youradusername         1/13 08:55   0+01:24:38 R  0   26.9 bash_pi.sh -b 8 -r
 16665.6   youradusername         1/13 08:55   0+01:24:37 R  0   26.9 bash_pi.sh -b 8 -r
 16665.7   youradusername         1/13 08:55   0+01:24:37 R  0   26.9 bash_pi.sh -b 8 -r
 16665.8   youradusername         1/13 08:55   0+01:24:37 R  0   26.9 bash_pi.sh -b 8 -r
 16665.9   youradusername         1/13 08:55   0+01:24:37 R  0   26.9 bash_pi.sh -b 8 -r

 10 jobs; 0 completed, 0 removed, 0 idle, 10 running, 0 held, 0 suspended

 [youradusername@pcf-osg ~]$ condor_rm 16665.0 16665.2 16665.4 16665.6 16665.8

 Job 16665.0 marked for removal
 Job 16665.2 marked for removal
 Job 16665.4 marked for removal
 Job 16665.6 marked for removal
 Job 16665.8 marked for removal
 
 [youradusername@pcf-osg ~]$ condor_q youradusername

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID        OWNER                  SUBMITTED    RUN_TIME   ST PRI SIZE CMD               
 16665.1   youradusername         1/13 08:55   0+01:26:04 R  0   26.9 bash_pi.sh -b 8 -r
 16665.3   youradusername         1/13 08:55   0+01:26:04 R  0   26.9 bash_pi.sh -b 8 -r
 16665.5   youradusername         1/13 08:55   0+01:26:04 R  0   26.9 bash_pi.sh -b 8 -r
 16665.7   youradusername         1/13 08:55   0+01:26:03 R  0   26.9 bash_pi.sh -b 8 -r
 16665.9   youradusername         1/13 08:55   0+01:26:03 R  0   26.9 bash_pi.sh -b 8 -r

 5 jobs; 0 completed, 0 removed, 0 idle, 5 running, 0 held, 0 suspended 

However, if you need to remove a whole cluster of jobs, simply provide the ClusterId of the cluster alone.
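
For example, to remove all of the remaining jobs in cluster 16665 at once:

 [youradusername@pcf-osg ~]$ condor_rm 16665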

Job History
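
Once a job completes or is removed, it no longer appears in the condor_q output. You should be able to review such jobs with the condor_history command, which accepts the same kinds of arguments as condor_q. For example, to list your own completed jobs:

 [youradusername@pcf-osg ~]$ condor_history youradusername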

Available Software

Environment modules provide users with an easy way to access different versions of software, including various libraries, compilers, and applications. Environment modules have historically been used in HPC environments for this purpose, and OSG has implemented a version based on Lmod to provide the typical module commands at any site in the OSG. All user jobs running on the computing resources accessible from PCF should therefore have access to the same OSG computing environment, which means you can test a workflow on an OSG Connect login node and then submit the same workflow without any changes.

The Environment Modules package provides for the dynamic modification of your shell environment via module files. Module commands set, change, or delete environment variables, typically in support of a particular application, and let you choose between different versions of the same software or different combinations of related codes. Modules can be used:

  • to manage necessary changes to the environment, such as changing the default path or defining environment variables;
  • to manage multiple versions of applications, tools, and libraries; and
  • to manage software where name conflicts with other software would cause problems.

Module files make your job easier by defining the environment variables and adding the directories to your path that are necessary when using a given package.

Loading and Unloading Modules

You must remove some modules before loading others. Some modules depend on others, so they may be loaded or unloaded as a consequence of another module command. For example, if intel and mvapich are both loaded, running the command module unload intel will automatically unload mvapich. Subsequently issuing the module load intel command does not automatically reload mvapich.
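
For example, to restore the intel/mvapich environment described above after unloading intel, both modules must be reloaded explicitly:

 [user@client ~]$ module load intel
 [user@client ~]$ module load mvapich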

If you find yourself regularly using a set of module commands, you may want to add these to your configuration files (".bashrc" for bash users, ".cshrc" for C shell users). Complete documentation is available in the module(1) and modulefile(4) manpages.
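
For example, bash users who always want a particular set of modules loaded at login could append lines like the following (module names here are illustrative) to their .bashrc:

 module load gcc
 module load fftw3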

Modules

Application packages, compilers, communications libraries, tools, and math libraries on these systems are updated continually. To provide a uniform mechanism for accessing different revisions of this software, these systems use the modules utility.

At login, module commands set up a basic environment for the default compilers, tools, and libraries, for example by setting environment variables such as $PATH and $MANPATH. There is therefore no need for you to set them yourself or to update them when system and application software is updated.

Users who require third-party applications, special libraries, and tools for their projects can quickly tailor their environment with only the applications and tools they need. Using modules to define a specific application environment allows you to keep your environment free from the clutter of all the application environments you don't need.

The environment for executing a major application can be set with a single module command. The specifics are defined in a modulefile, which sets, unsets, appends to, or prepends to environment variables (e.g., $PATH, $LD_LIBRARY_PATH, $INCLUDE_PATH, $MANPATH) for the specific application. Each modulefile may also set functions or aliases for use with the application. The general format of this command is:

 module load modulename

where modulename is the name of the module to load. If you often need a specific application loaded at login, see the Loading and Unloading Modules section above. Package directories are typically named after the package, and each package directory contains subdirectories for the specific versions of the package.

As an example, the fftw3 package requires several environment variables that point to its home, libraries, include files, and documentation. These can be set in your shell environment by loading the fftw3 module:

 [user@client ~]$ module load fftw3

To see a synopsis about using an application in the module's environment (in this case, fftw3), or to see a list of your currently loaded modules, execute the following commands:

 [user@client ~]$ module help fftw3
 [user@client ~]$ module list

Available Modules

The module system is organized hierarchically to prevent users from loading software that will not function properly with the currently loaded compiler/MPI environment (configuration). Two methods exist for viewing the availability of modules: looking at the modules available with the currently loaded compiler/MPI configuration, and looking at all of the modules installed on the system.

To see a list of modules available with the current compiler/MPI configuration, execute the following command:

 [user@client ~]$ module avail

To see a list of modules available with any compiler/MPI configuration, execute the following command:

 [user@client ~]$ module spider

This command will display all available packages on the system. To get specific information about a particular package, including the possible compiler/MPI configurations for that package, execute the following command:

 [user@client ~]$ module spider modulename

Some useful module commands are:

  • module avail : lists all the available modules
  • module help foo : displays help on module foo
  • module display foo : shows what changes loading module foo would make to the environment, without actually loading it
  • module load foo : loads module foo
  • module list : displays your currently loaded modules
  • module swap foo1 foo2 : switches loaded module foo1 with module foo2
  • module unload foo : reverses all changes to the environment made by previously loading module foo

Special Instructions

Running Jobs on Comet

As noted in the System Status section above, access to Comet is currently unavailable from PCF, but it is expected to be available again in early 2017.

Running Jobs on Amazon Web Services

condor_annex is a Perl-based script that utilizes the AWS command-line interface and other AWS services to orchestrate the delivery of HTCondor execute nodes to an HTCondor pool like the one available to you on pcf-osg.t2.ucsd.edu. If you would like to try running your jobs on AWS resources, please contact Marty Kandes at mkandes@sdsc.edu. Some backend configuration of your AWS account will be necessary to get started. However, once your AWS account is configured, you will be able to order instances on-demand with one command:

condor_annex \
   --project-id "$AWS_PROJECT_ID" \
   --region "$AWS_DEFAULT_REGION" \
   --central-manager "$AWS_CENTRAL_MANAGER" \
   --vpc "$AWS_VPC_ID" \
   --subnet "$AWS_SUBNET_ID" \
   --keypair "$AWS_KEY_PAIR_NAME" \
   --instances "$NUMBER_OF_INSTANCES_TO_ORDER" \
   --expiry "$AWS_LEASE_EXPIRATION" \
   --password-file "$CONDOR_PASSWORD_FILE" \
   --image-ids "$AWS_AMI_ID" \
   --instance-types "$AWS_INSTANCE_TYPE" \
   --spot-prices "$AWS_SPOT_BID" \
   --config-file "$AWS_USER_CONFIG"

Contact Information

  • Physics Help Desk: helpdesk@physics.ucsd.edu
  • PCF System Administrators: Marty Kandes (mkandes@sdsc.edu)

Additional Documentation

  • bash_pi.condor: A sample HTCondor submit description file
  • bash_pi.sh: A bash script that uses a simple Monte Carlo method to estimate the value of Pi