
UCSD Physics Computing Facility (PCF) User Guide

About this Document

The UCSD Physics Computing Facility (PCF) provides access to multiple high-throughput computing resources that are made available to students, faculty, and staff in the Department of Physics as well as those in the broader scientific community at UCSD. This document describes how to get an account on PCF and begin submitting jobs to its computing resources.

This document follows the general Open Science Grid (OSG) documentation conventions:

  1. A User Command Line is illustrated by a green box that displays a prompt:
     [user@client ~]$ 
  2. Lines in a file are illustrated by a yellow box that displays the desired lines in a file:
     priorities=1 

System Overview

PCF is a dual-socket login node with two Intel Xeon E5-2670 v3 processors, 132 GB of RAM, and 1 TB of hard disk space. The system is currently running CentOS 6.8 and uses HTCondor for batch job submission and resource management. PCF currently enables users to access the following computing resources: the PCF node itself (pcf-osg.t2.ucsd.edu), the UCSD CMS Tier 2 cluster, the Comet supercomputer at SDSC (for users with an XSEDE allocation), and the Open Science Grid. See the job routing table in the Submit Description Files section below for details.

While users may submit and run jobs locally on PCF itself, all computationally intensive jobs should generally be run only on the larger computing resources, reserving PCF's local resources for development and testing purposes only.

System Status

  • Access to Comet is currently unavailable from PCF, but it will again be available in early 2017.

User Accounts

You may obtain a user account on PCF by contacting the Physics Help Desk (helpdesk@physics.ucsd.edu). They will need your UCSD Active Directory (AD) username to create the account. Accounts are available to any UCSD student, faculty member, or staff member running scientific computing workloads.

Once your account is created, you will be able to access PCF via SSH using your AD credentials (username/password).

 [user@client ~]$ ssh youradusername@pcf-osg.t2.ucsd.edu 
 Password: ENTERYOURADPASSWORDHERE

Managing Jobs with HTCondor

Job Submission

PCF uses HTCondor to manage batch job submission to the high-throughput computing resources its users may access. Jobs can be submitted to PCF using the condor_submit command as follows:

 [youradusername@pcf-osg ~]$ condor_submit job.condor 

where job.condor is the name of a UNIX-formatted plain ASCII file known as a submit description file. This file contains the commands, directives, expressions, and variables used to describe your batch job to HTCondor, such as which executable to run; the files to use for standard input, standard output, and standard error; and the resources required to run the job successfully.

Submit Description Files

A sample HTCondor submit description file (bash_pi.condor) is shown below.

 # A sample HTCondor submit description file
 universe = vanilla
 executable = bash_pi.sh
 arguments = -b 8 -r 5 -s 10000
 should_transfer_files = YES 
 when_to_transfer_output = ON_EXIT
 output = bash_pi.out.$(ClusterId).$(ProcId)                                                                                                                
 error = bash_pi.err.$(ClusterId).$(ProcId)
 log = bash_pi.log.$(ClusterId).$(ProcId)
 request_cpus = 1 
 request_disk = 8000000
 request_memory = 1024
 +ProjectName = "PCFOSGUCSD"
 +local = TRUE
 +site_local = FALSE
 +sdsc = FALSE
 +uc = FALSE
 queue 10 

The first line here

 # A sample HTCondor submit description file 
is simply a comment line in the submit description file. Any comments in a submit description file should be placed on their own line.

Next, the universe command defines a specific type of execution environment for your job.

 universe = vanilla 
All batch jobs submitted to PCF should use the default vanilla universe.

The executable command specifies the name of the executable you want to run.

 executable = bash_pi.sh 
Only one executable command should be specified in any submit description file. If no path or a relative path is used, then the executable is presumed to be relative to the current working directory of the user when the condor_submit command was issued. In this example, the executable is a bash shell script named bash_pi.sh, which uses a simple Monte Carlo method to estimate the value of Pi.

To successfully run this example script, a user is required to provide three command-line arguments: (1) the size of integers to use in bytes, (2) the number of decimal places to round the estimate of Pi, and (3) the number of Monte Carlo samples. These command-line arguments are passed to the script in the submit description file via the arguments command.

 arguments = -b 8 -r 5 -s 10000 
Here, the arguments command indicates that the script should use 8-byte integers, round the estimate of Pi to 5 decimal places, and take 10000 Monte Carlo samples.
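
For reference, a minimal sketch of what such a script might look like is shown below. This is illustrative only and is not the bash_pi.sh attached to this page; here the -b option is parsed but ignored, and the bc calculator is assumed to be available on the execute machine.

 #!/usr/bin/env bash
 # Illustrative Monte Carlo estimate of Pi (not the attached bash_pi.sh).
 bytes=8; round=5; samples=10000
 while getopts "b:r:s:" opt; do
   case "$opt" in
     b) bytes=$OPTARG ;;   # integer size in bytes (ignored in this sketch)
     r) round=$OPTARG ;;   # decimal places to round the estimate of Pi
     s) samples=$OPTARG ;; # number of Monte Carlo samples
   esac
 done
 inside=0
 for ((i = 0; i < samples; i++)); do
   x=$RANDOM; y=$RANDOM   # random integers in [0, 32767]
   (( x*x + y*y <= 32767*32767 )) && inside=$((inside + 1))
 done
 # The fraction of points inside the quarter circle approximates Pi/4
 echo "scale=$round; 4 * $inside / $samples" | bc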

The should_transfer_files command determines if HTCondor transfers files to and from the remote machine where your job runs.

 should_transfer_files = YES 
YES will cause HTCondor to always transfer input and output files for your jobs. However, the total amount of input and output data for each job using the HTCondor file transfer mechanism should be kept under 5 GB so that the data can be successfully pulled from PCF by your jobs, processed on the remote machines where they run, and then pushed back to your home directory on PCF. If your requirements exceed this 5 GB per-job limit, please consult the PCF system administrators about setting up an alternative file transfer mechanism.
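
If your job needs additional input files beyond the executable itself, they can be listed with HTCondor's transfer_input_files command. For example (the filenames here are placeholders for illustration only, not part of the sample job):

 transfer_input_files = data.txt, parameters.cfg 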

The when_to_transfer_output command determines when HTCondor transfers your job's output files back to PCF. If when_to_transfer_output is set to ON_EXIT, HTCondor will transfer the file listed in the output command back to PCF, as well as any other files created by the job in its remote scratch directory, but only when the job exits on its own.

 when_to_transfer_output = ON_EXIT 
If when_to_transfer_output is set to ON_EXIT_OR_EVICT, then the output files are transferred back to PCF any time the job leaves a remote machine, either because it exited on its own, or was evicted by HTCondor for any reason prior to job completion. Any output files transferred back to PCF upon eviction are then automatically sent back out again as input files if the job restarts. This option is intended for fault tolerant jobs which periodically save their own state and are designed to restart where they left off.
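
For such fault-tolerant jobs, the corresponding line in the submit description file would simply be:

 when_to_transfer_output = ON_EXIT_OR_EVICT 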

The output and error commands provide the paths and filenames used by HTCondor to capture any output and error messages your executable would normally write to stdout and stderr. Similarly, the log command is used to provide the path and filename for the HTCondor job event log, which is a chronological list of events that occur as a job runs.

 output = bash_pi.out.$(ClusterId).$(ProcId)
 error = bash_pi.err.$(ClusterId).$(ProcId)
 log = bash_pi.log.$(ClusterId).$(ProcId) 
Note that each of these commands in the sample submit description file uses the $(ClusterId) and $(ProcId) variables to define its filename. This appends the ClusterId and ProcId numbers of each HTCondor job to its respective output, error, and job event log files, which is especially useful for tagging the files belonging to an individual job when a single submit description file is used to queue many jobs at once.
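
For example, if the sample file were submitted as cluster 16661 (the ClusterId that appears in the condor_q output later in this guide), the first queued job (ProcId 0) would produce the files:

 bash_pi.out.16661.0
 bash_pi.err.16661.0
 bash_pi.log.16661.0 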

Next in the sample submit description file are the standard resource request commands: request_cpus, request_disk, and request_memory.

 request_cpus = 1 
 request_disk = 8000000
 request_memory = 1024 
These commands tell HTCondor what resources your job requires to run successfully: CPUs in number of cores, disk in KiB (default), and memory in MiB (default). It is important to provide this information in your submit description files as accurately as possible, since HTCondor uses these requirements to match your job to a machine that provides such resources. Otherwise, your job may fail when it is matched with, and attempts to run on, a machine without sufficient resources. All jobs submitted to PCF should contain these request commands. In general, you may assume that any job submitted to PCF can safely use up to 8 CPU cores, 20 GB of disk space, and 2 GB of memory per CPU core requested. Note: You can avoid the default units of KiB and MiB for the request_disk and request_memory commands by appending K (or KB), M (or MB), G (or GB), or T (or TB) to the numerical value to indicate the units to be used.
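
For example, roughly the same disk and memory requests from the sample file could be written with explicit units as follows (an illustrative alternative, not an addition to the sample file):

 request_disk = 8GB
 request_memory = 1GB 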

HTCondor allows users (and system administrators) to append custom attributes to any job at the time of submission. On PCF, some of these custom attributes are used to mark jobs for special routing and accounting purposes. For example,

 +ProjectName = "PCFOSGUCSD" 
is a job attribute used by the Open Science Grid (OSG) for tracking resource usage by group. All jobs submitted to PCF, including yours, should contain this +ProjectName = "PCFOSGUCSD" attribute, unless directed otherwise.

The next set of custom job attributes in the sample submit description file

 +local = TRUE
 +site_local = FALSE
 +sdsc = FALSE
 +uc = FALSE 
are a set of boolean job routing flags that allow you to explicitly target where your jobs may run. Each one of these boolean flags is associated with one of the different computing resources accessible from PCF. When you set the value of one of these resource flags to TRUE, you permit your jobs to run on the system associated with that flag. In contrast, when you set the value of the resource flag to FALSE, you prevent your jobs from running on that system. The relationship between each job routing flag and computing resource is provided in the following table.

Job Routing Flag | Default Value | Computing Resource   | Accessibility
+local           | TRUE          | pcf-osg.t2.ucsd.edu  | Open to all PCF users
+site_local      | TRUE          | CMS Tier 2 Cluster   | Open to all PCF users
+sdsc            | FALSE         | Comet Supercomputer  | Open only to PCF users with an XSEDE allocation on Comet
+uc              | FALSE         | Open Science Grid    | Open to all PCF users

As such, we see here that the sample submit description file is only targeted to run the job locally on PCF itself.
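
For example, to instead allow the job to run anywhere on the Open Science Grid but not locally on PCF, you could flip the routing flags as follows (an illustrative alternative to the sample settings):

 +local = FALSE
 +site_local = FALSE
 +sdsc = FALSE
 +uc = TRUE 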

Finally, the sample submit description file ends with the queue command, which in the form shown here simply places an integer number of copies (10) of the job in the HTCondor queue upon submission. If no integer value is given with the queue command, the default value is 1. Every submit description file must contain at least one queue command.

A Note on Wrapper Scripts

It is worth noting that HTCondor differs from batch systems like SLURM and PBS, which use standard shell scripts annotated with scheduler directives to communicate both the requirements of a batch job and how the job's executable should be run. An HTCondor submit description file instead separates the directives (or commands) given to the scheduler from how the executable is actually run (i.e., how it would look if run interactively from the command line). As such, HTCondor users often need to wrap their actual (payload) executable within a shell script, as is done with bash_pi.sh in this sample job.

Job Status

Once you submit a job to PCF, you can periodically check on its status by using the condor_q command. There will likely always be other user jobs in the queue besides your own. Therefore, in general, you will want to issue the command by providing your username as an argument.

 [youradusername@pcf-osg ~]$ condor_q youradusername

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
 ID        OWNER                  SUBMITTED    RUN_TIME   ST PRI SIZE CMD               
 16661.0   youradusername         1/12 14:51   0+00:00:04 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.1   youradusername         1/12 14:51   0+00:00:04 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.2   youradusername         1/12 14:51   0+00:00:04 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.3   youradusername         1/12 14:51   0+00:00:04 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.4   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.5   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.6   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.7   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.8   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16661.9   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.0   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.1   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.2   youradusername         1/12 14:51   0+00:00:03 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.3   youradusername         1/12 14:51   0+00:00:02 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.4   youradusername         1/12 14:51   0+00:00:02 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.5   youradusername         1/12 14:51   0+00:00:02 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.6   youradusername         1/12 14:51   0+00:00:02 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.7   youradusername         1/12 14:51   0+00:00:02 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.8   youradusername         1/12 14:51   0+00:00:01 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.9   youradusername         1/12 14:51   0+00:00:01 R  0   0.0  pi.sh -b 8 -r 7 -s

 20 jobs; 0 completed, 0 removed, 0 idle, 20 running, 0 held, 0 suspended 

This will limit the status information returned by condor_q to your user jobs only. However, if there is a particular subset of your jobs you are interested in checking on, you can also limit the status information by providing the specific job ClusterId as an argument to condor_q.

 [youradusername@pcf-osg ~]$ condor_q 16662

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
  ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
 16662.0   mkandes         1/12 14:51   0+00:01:53 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.1   mkandes         1/12 14:51   0+00:01:53 R  0   0.0  pi.sh -b 8 -r 7 -s 
 16662.2   mkandes         1/12 14:51   0+00:01:53 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.3   mkandes         1/12 14:51   0+00:01:52 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.4   mkandes         1/12 14:51   0+00:01:52 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.5   mkandes         1/12 14:51   0+00:01:52 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.6   mkandes         1/12 14:51   0+00:01:52 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.7   mkandes         1/12 14:51   0+00:01:52 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.8   mkandes         1/12 14:51   0+00:01:51 R  0   0.0  pi.sh -b 8 -r 7 -s
 16662.9   mkandes         1/12 14:51   0+00:01:51 R  0   0.0  pi.sh -b 8 -r 7 -s

 10 jobs; 0 completed, 0 removed, 0 idle, 10 running, 0 held, 0 suspended 

You can also inspect the complete set of ClassAd attributes for an individual job by passing its full ID to condor_q with the -l (long) option.

 [youradusername@pcf-osg ~]$ condor_q 16662.4 -l | less

 MATCH_EXP_JOB_GLIDEIN_Entry_Name = "Unknown"
 MATCH_EXP_JOB_GLIDEIN_Schedd = "Unknown"
 MaxHosts = 1
 MATCH_EXP_JOBGLIDEIN_ResourceName = "UCSD"
 User = "mkandes@pcf-osg.t2.ucsd.edu"
 EncryptExecuteDirectory = false
 MATCH_GLIDEIN_ClusterId = "Unknown"
 OnExitHold = false
 CoreSize = 0
 JOB_GLIDEIN_SiteWMS = "$$(GLIDEIN_SiteWMS:Unknown)"
 MATCH_GLIDEIN_Factory = "Unknown"
 MachineAttrCpus0 = 1
 WantRemoteSyscalls = false
 MyType = "Job"
 Rank = 0.0
 CumulativeSuspensionTime = 0
 MinHosts = 1
 MATCH_EXP_JOB_GLIDEIN_SiteWMS_Slot = "Unknown"
 PeriodicHold = false
 PeriodicRemove = false
 Err = "pi.err.16662.4"
 ProcId = 4
 ...

You can also ask condor_q to explain why a particular job is not running (e.g., why it is idle or held) with the -analyze option, as shown below.
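
A minimal example of this usage (output not shown here) is:

 [youradusername@pcf-osg ~]$ condor_q -analyze 16662.4 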

Job Removal

To remove one of your jobs from the queue, use the condor_rm command with the job's ID as an argument.

 [youradusername@pcf-osg ~]$ condor_rm 16662.4 
 Job 16662.4 marked for removal

Running condor_q on the cluster again confirms that the job has been removed from the queue.

 [youradusername@pcf-osg ~]$ condor_q 16662

 -- Schedd: pcf-osg.t2.ucsd.edu : <169.228.130.75:9615?...
  ID        OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
 16662.0   mkandes         1/12 14:51   0+00:23:04 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.1   mkandes         1/12 14:51   0+00:23:04 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.2   mkandes         1/12 14:51   0+00:23:04 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.3   mkandes         1/12 14:51   0+00:23:03 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.5   mkandes         1/12 14:51   0+00:23:03 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.6   mkandes         1/12 14:51   0+00:23:03 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.7   mkandes         1/12 14:51   0+00:23:03 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.8   mkandes         1/12 14:51   0+00:23:02 R  0   26.9 pi.sh -b 8 -r 7 -s
 16662.9   mkandes         1/12 14:51   0+00:23:02 R  0   26.9 pi.sh -b 8 -r 7 -s

 9 jobs; 0 completed, 0 removed, 0 idle, 9 running, 0 held, 0 suspended

Job History
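
Information about jobs that have already completed and left the queue is typically available through HTCondor's condor_history command. For example, to list your own completed jobs (a minimal sketch of the usage; options and output format may vary with the HTCondor version installed on PCF):

 [youradusername@pcf-osg ~]$ condor_history youradusername 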

Available Software

Environment modules provide users with an easy way to access different versions of software and to access various libraries, compilers, and tools. All user jobs running on the computing resources accessible from PCF should have access to the OSG computing environment.

Environment modules have historically been used in HPC environments for this purpose (cf. the Wikipedia reference). OSG has implemented a version based on Lmod to provide the typical module commands on any site in the OSG. This means you can test workflows on the login node (PCF) and then submit the same workflow without any changes.

The Environment Modules package provides for dynamic modification of your shell environment via module files. Module commands set, change, or delete environment variables, typically in support of a particular application. They also let you choose between different versions of the same software or different combinations of related codes. In particular, modules can be used:

  • to manage necessary changes to the environment, such as changing the default path or defining environment variables
  • to manage multiple versions of applications, tools, and libraries
  • to manage software where name conflicts with other software would cause problems

Modules have been created for many of the software packages available in this environment. They make your job easier by defining the environment variables and adding the directories to your path that are necessary when using a given package.

Loading and Unloading Modules

You must remove some modules before loading others. Some modules depend on others, so they may be loaded or unloaded as a consequence of another module command. For example, if intel and mvapich are both loaded, running the command module unload intel will automatically unload mvapich. Subsequently issuing the module load intel command does not automatically reload mvapich.
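
A minimal sketch of this behavior is shown below; the intel and mvapich module names are taken from the example above and may not exist on PCF.

 [youradusername@pcf-osg ~]$ module unload intel    # also unloads mvapich, which depends on intel
 [youradusername@pcf-osg ~]$ module load intel      # does NOT automatically reload mvapich
 [youradusername@pcf-osg ~]$ module load mvapich    # reload it explicitly if it is still needed 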

If you find yourself regularly using a set of module commands, you may want to add these to your configuration files (".bashrc" for bash users, ".cshrc" for C shell users). Complete documentation is available in the module(1) and modulefile(4) manpages.
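
For example, a bash user who always wants a particular module loaded (the module name here is illustrative) could append a line like the following to their ~/.bashrc:

 module load fftw3 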

Modules

Application packages, compilers, communications libraries, tools, and math libraries are updated continually. To facilitate this task and to provide a uniform mechanism for accessing different revisions of software, the modules utility is used.

At login, module commands set up a basic environment for the default compilers, tools, and libraries. For example, the $PATH, $MANPATH, and $LIBPATH environment variables, directory locations (e.g., $HOME), and license paths are set by the login modules. Therefore, there is no need for you to set or update them when updates are made to system and application software.

Users who require third-party applications, special libraries, and tools for their projects can quickly tailor their environment with only the applications and tools they need. Using modules to define a specific application environment allows you to keep your environment free from the clutter of all the application environments you don't need.

The environment for executing each major application can be set with a module command. The specifics are defined in a modulefile, which sets, unsets, appends to, or prepends to environment variables (e.g., $PATH, $LD_LIBRARY_PATH, $INCLUDE_PATH, $MANPATH) for the specific application. Each modulefile also sets functions or aliases for use with the application. You only need to invoke a single command to configure the application/programming environment properly. The general format of this command is:

 module load modulename 
where modulename is the name of the module to load. If you often need a specific application, you can add the corresponding module load command to your shell configuration file, as described in Loading and Unloading Modules above.

Most of the package directories are named after the package. In each package directory there are subdirectories that contain the specific versions of the package.

As an example, the fftw3 package requires several environment variables that point to its home, libraries, include files, and documentation. These can be set in your shell environment by loading the fftw3 module:

 [youradusername@pcf-osg ~]$ module load fftw3 

To look at a synopsis about using an application in the module's environment (in this case, fftw3), or to see a list of currently loaded modules, execute the following commands:

 [youradusername@pcf-osg ~]$ module help fftw3 
 [youradusername@pcf-osg ~]$ module list 

Available Modules

The module system is organized hierarchically to prevent users from loading software that will not function properly with the currently loaded compiler/MPI environment (configuration). Two methods exist for viewing the availability of modules: looking at the modules available with the currently loaded compiler/MPI configuration, and looking at all of the modules installed on the system.

To see a list of modules available to the user with the current compiler/MPI configuration, users can execute the following command:

 [youradusername@pcf-osg ~]$ module avail 
This will allow the user to see which software packages are available with the current compiler/MPI configuration.

To see a list of modules available to the user with any compiler/MPI configuration, users can execute the following command:

 [youradusername@pcf-osg ~]$ module spider 
This command will display all available packages on the system. To get specific information about a particular package, including the possible compiler/MPI configurations for that package, execute the following command:

 [youradusername@pcf-osg ~]$ module spider modulename 

Some useful module commands are:

  • module avail - lists all the available modules
  • module help foo - displays help on module foo
  • module display foo - indicates what changes would be made to the environment by loading module foo without actually loading it
  • module load foo - loads module foo
  • module list - displays your currently loaded modules
  • module swap foo1 foo2 - switches loaded module foo1 with module foo2
  • module unload foo - reverses all changes to the environment made by previously loading module foo

Running Jobs on Amazon Web Services (AWS) with condor_annex

condor_annex is a Perl-based script that utilizes the AWS command-line interface and other AWS services to orchestrate the delivery of HTCondor execute nodes to an HTCondor pool like the one available to you on pcf-osg.t2.ucsd.edu. If you would like to try running your jobs on AWS resources, please contact Marty Kandes at mkandes@sdsc.edu. Some backend configuration of your AWS account will be necessary to get started. However, once your AWS account is configured, you will be able to order instances on-demand with one command:

condor_annex \
   --project-id "$AWS_PROJECT_ID" \
   --region "$AWS_DEFAULT_REGION" \
   --central-manager "$AWS_CENTRAL_MANAGER" \
   --vpc "$AWS_VPC_ID" \
   --subnet "$AWS_SUBNET_ID" \
   --keypair "$AWS_KEY_PAIR_NAME" \
   --instances $NUMBER_OF_INSTANCES_TO_ORDER \
   --expiry "$AWS_LEASE_EXPIRATION" \
   --password-file "$CONDOR_PASSWORD_FILE" \
   --image-ids "$AWS_AMI_ID" \
   --instance-types "$AWS_INSTANCE_TYPE" \
   --spot-prices $AWS_SPOT_BID \
   --config-file "$AWS_USER_CONFIG"

Additional Documentation

  • pi.condor: A sample HTCondor submit description file
  • pi.sh: A bash script that estimates the value of Pi via the Monte Carlo method
  • bash_pi.condor: A sample HTCondor submit description file
  • bash_pi.sh: A bash script that uses a simple Monte Carlo method to estimate the value of Pi