Option(s) define multiple jobs in a co-scheduled heterogeneous job.
For more details about heterogeneous jobs see the document
.br
https://slurm.schedmd.com/heterogeneous_jobs.html

.SH "DESCRIPTION"
Run a parallel job on a cluster managed by Slurm.  If necessary, srun will
first create a resource allocation in which to run the parallel job.

The following document describes the influence of various options on the
allocation of cpus to jobs and tasks.
.br
https://slurm.schedmd.com/cpu_management.html

.SH "RETURN VALUE"
srun will return the highest exit code of all tasks run or the highest signal
(with the high-order bit set in an 8-bit integer -- e.g. 128 + signal) of any
task that exited with a signal.
.br
The value 253 is reserved for out-of-memory errors.
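.LP
For example, per the rule above, a task terminated by SIGTERM (signal 15)
would be expected to yield an exit code of 128 + 15 = 143. A minimal sketch
(shell behavior and signal numbers assume a typical Linux system):
.nf

> srun \-n1 sh \-c 'kill \-TERM $$'
> echo $?
143

.fi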

.SH "EXECUTABLE PATH RESOLUTION"

The executable is resolved in the following order:
.br

1. If executable starts with ".", then path is constructed as:
current working directory / executable
.br
2. If executable starts with a "/", then path is considered absolute.
.br
3. If executable can be resolved through PATH. See \fBpath_resolution\fR(7).
.br
4. If executable is in current working directory.
.br
.P
Current working directory is the calling process working directory unless the
\fB\-\-chdir\fR argument is passed, which will override the current working
directory.
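.PP
For example, with a hypothetical program \fImyprog\fR in \fI/home/alice/bin\fR,
rule 1 above resolves the leading "." against the working directory:
.nf

> srun \-\-chdir=/home/alice/bin ./myprog

.fi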

.SH "OPTIONS"
.LP

.TP
\fB\-\-accel\-bind\fR=<\fIoptions\fR>
Control how tasks are bound to generic resources of type gpu, mic and nic.
Multiple options may be specified. Supported options include:
.RS
.TP
\fBg\fR
Bind each task to GPUs which are closest to the allocated CPUs.
.TP
\fBm\fR
Bind each task to MICs which are closest to the allocated CPUs.
.TP
\fBn\fR
Bind each task to NICs which are closest to the allocated CPUs.
.TP
\fBv\fR
Verbose mode. Log how tasks are bound to GPU and NIC devices.
.RE

.TP
\fB\-A\fR, \fB\-\-account\fR=<\fIaccount\fR>
Charge resources used by this job to specified account.
The \fIaccount\fR is an arbitrary string. The account name may
be changed after job submission using the \fBscontrol\fR
command. This option applies to job allocations.
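.IP
For example, the following might charge a job to a hypothetical account
named "physics":
.nf

> srun \-A physics \-n8 a.out

.fi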

.TP
\fB\-\-acctg\-freq\fR
Define the job accounting and profiling sampling intervals.
This can be used to override the \fIJobAcctGatherFrequency\fR parameter in Slurm's
configuration file, \fIslurm.conf\fR.
The supported format is as follows:
.RS
.TP 12
\fB\-\-acctg\-freq=\fR\fI<datatype>\fR\fB=\fR\fI<interval>\fR
where \fI<datatype>\fR=\fI<interval>\fR specifies the task sampling
interval for the jobacct_gather plugin or a
sampling interval for a profiling type by the
acct_gather_profile plugin. Multiple,
comma-separated \fI<datatype>\fR=\fI<interval>\fR intervals
may be specified. Supported datatypes are as follows:
.RS
.TP
\fBtask=\fI<interval>\fR
where \fI<interval>\fR is the task sampling interval in seconds
for the jobacct_gather plugins and for task
profiling by the acct_gather_profile plugin.
NOTE: This frequency is used to monitor memory usage. If memory limits
are enforced the highest frequency a user can request is what is configured in
the slurm.conf file.  They can not turn it off (=0) either.
.TP
\fBenergy=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for energy profiling using the acct_gather_energy plugin
.TP
\fBnetwork=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for infiniband profiling using the acct_gather_interconnect
plugin.
.TP
\fBfilesystem=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for filesystem profiling using the acct_gather_filesystem
plugin.
.RE
.RE
.br
The default value for the task sampling interval
is 30. The default value for all other intervals is 0.
An interval of 0 disables sampling of the specified type.
If the task sampling interval is 0, accounting
information is collected only at job termination (reducing Slurm
interference with the job).
This option applies to job allocations.
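.IP
For example, following the format above, this might sample task statistics
every 15 seconds and energy use every 30 seconds (a sketch; the corresponding
gather plugins must be configured for these intervals to be honored):
.nf

> srun \-\-acctg\-freq=task=15,energy=30 \-n4 a.out

.fi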

.TP
\fB\-B\fR, \fB\-\-extra\-node\-info\fR=<\fIsockets\fR[:\fIcores\fR[:\fIthreads\fR]]>
Restrict node selection to nodes with at least the specified number of
sockets, cores per socket and/or threads per core.
\fBNOTE\fR: These values are specified in the order sockets:cores:threads.
Each value specified is considered a minimum.
An asterisk (*) can be used as a placeholder indicating that all available
resources of that type are to be utilized. Values can also be specified as
min-max. The individual levels can also be specified in separate options if
desired:
.nf
    \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR>
    \fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
    \fB\-\-threads\-per\-core\fR=<\fIthreads\fR>
.fi
If task/affinity plugin is enabled, then specifying an allocation in this
manner also sets a default \fB\-\-cpu\-bind\fR option of \fIthreads\fR
if the \fB\-B\fR option specifies a thread count, otherwise an option of
\fIcores\fR if a core count is specified, otherwise an option of \fIsockets\fR.
If SelectType is configured to select/cons_res, it must have a parameter of
CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory for this option
to be honored.
If not specified, \fBscontrol show job\fR will display 'ReqS:C:T=*:*:*'. This
option applies to job allocations.
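.IP
As a sketch, the following might restrict selection to nodes with at least two
sockets and at least four cores per socket, using any available thread count:
.nf

> srun \-B "2:4:*" \-N1 a.out

.fi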

.TP
\fB\-\-bb\fR=<\fIspec\fR>
Burst buffer specification. The form of the specification is system dependent.
Also see \fB\-\-bbf\fR. This option applies to job allocations.

.TP
\fB\-\-bbf\fR=<\fIfile_name\fR>
Path of file containing burst buffer specification.
The form of the specification is system dependent.
Also see \fB\-\-bb\fR. This option applies to job allocations.

.TP
\fB\-\-bcast\fR[=<\fIdest_path\fR>]
Copy executable file to allocated compute nodes.
If a file name is specified, copy the executable to the specified destination
file path. If no path is specified, copy the file to a file named
"slurm_bcast_<job_id>.<step_id>" in the current working.
For example, "srun \-\-bcast=/tmp/mine \-N3 a.out" will copy the file "a.out"
from your current directory to the file "/tmp/mine" on each of the three
allocated compute nodes and execute that file. This option applies to step
allocations.

.TP
\fB\-b\fR, \fB\-\-begin\fR=<\fItime\fR>
Defer initiation of this job until the specified time.
It accepts times of the form \fIHH:MM:SS\fR to run a job at
a specific time of day (seconds are optional).
(If that time is already past, the next day is assumed.)
You may also specify \fImidnight\fR, \fInoon\fR, \fIfika\fR (3 PM) or
\fIteatime\fR (4 PM) and you can have a time\-of\-day suffixed
with \fIAM\fR or \fIPM\fR for running in the morning or the evening.
You can also say what day the job will be run, by specifying a date of
the form \fIMMDDYY\fR or \fIMM/DD/YY\fR or \fIYYYY\-MM\-DD\fR. Combine
date and time using the format \fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. You can
also give times like \fInow + count time\-units\fR, where the time\-units
can be \fIseconds\fR (default), \fIminutes\fR, \fIhours\fR, \fIdays\fR,
or \fIweeks\fR, and you can tell Slurm to run the job today with the
keyword \fItoday\fR and to run the job tomorrow with the keyword
\fItomorrow\fR. For example:
.nf
   \-\-begin=16:00
   \-\-begin=now+1hour
   \-\-begin=now+60           (seconds by default)
   \-\-begin=2010\-01\-20T12:34:00
.fi

.RS
.PP
Notes on date/time specifications:
 \- Although the 'seconds' field of the HH:MM:SS time specification is
allowed by the code, note that the poll time of the Slurm scheduler
is not precise enough to guarantee dispatch of the job on the exact
second.  The job will be eligible to start on the next poll
following the specified time. The exact poll interval depends on the
Slurm scheduler (e.g., 60 seconds with the default sched/builtin).
 \- If no time (HH:MM:SS) is specified, the default is (00:00:00).
 \- If a date is specified without a year (e.g., MM/DD) then the current
year is assumed, unless the combination of MM/DD and HH:MM:SS has
already passed for that year, in which case the next year is used.
.br
This option applies to job allocations.
.RE

.TP
\fB\-\-cluster\-constraint\fR=<\fIlist\fR>
Specifies features that a federated cluster must have to have a sibling job
submitted to it. Slurm will attempt to submit a sibling job to a cluster if it
has at least one of the specified features.

.TP
\fB\-\-comment\fR=<\fIstring\fR>
An arbitrary comment. This option applies to job allocations.

.TP
\fB\-\-compress\fR[=\fItype\fR]
Compress file before sending it to compute hosts.
The optional argument specifies the data compression library to be used.
Supported values are "lz4" (default) and "zlib".
Some compression libraries may be unavailable on some systems.
For use with the \fB\-\-bcast\fR option. This option applies to step
allocations.
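.IP
For example, the following might compress "a.out" with lz4 while copying it
to the three allocated nodes (assuming lz4 support is available):
.nf

> srun \-\-bcast=/tmp/mine \-\-compress=lz4 \-N3 a.out

.fi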

.TP
\fB\-C\fR, \fB\-\-constraint\fR=<\fIlist\fR>
Nodes can have \fBfeatures\fR assigned to them by the Slurm administrator.
Users can specify which of these \fBfeatures\fR are required by their job
using the constraint option.
Only nodes having features matching the job constraints will be used to
satisfy the request.
Multiple constraints may be specified with AND, OR, matching OR,
resource counts, etc. (some operators are not supported on all system types).
Supported \fBconstraint\fR options include:
.PD 1
.RS
.TP
\fBAND\fR
Only nodes with all of the specified features will be used.
The ampersand is used for an AND operator.
For example, \fB\-\-constraint="intel&gpu"\fR
.TP
\fBOR\fR
Only nodes with at least one of the specified features will be used.
The vertical bar is used for an OR operator.
For example, \fB\-\-constraint="intel|amd"\fR
.TP
\fBMatching OR\fR
If only one of a set of possible options should be used for all allocated
nodes, then use the OR operator and enclose the options within square brackets.
For example: "\fB\-\-constraint=[rack1|rack2|rack3|rack4]"\fR might
be used to specify that all nodes must be allocated on a single rack of
the cluster, but any of those four racks can be used.
.TP
\fBMultiple Counts\fR
Specific counts of multiple resources may be specified by using the AND
operator and enclosing the options within square brackets.
For example: "\fB\-\-constraint=[rack1*2&rack2*4]"\fR might
be used to specify that two nodes must be allocated from nodes with the feature
of "rack1" and four nodes must be allocated from nodes with the feature
"rack2".

\fBNOTE:\fR This construct does not support multiple Intel KNL NUMA or MCDRAM
modes. For example, while "\fB\-\-constraint=[(knl&quad)*2&(knl&hemi)*4]"\fR is
not supported, "\fB\-\-constraint=[haswell*2&(knl&hemi)*4]"\fR is supported.
Specification of multiple KNL modes requires the use of a heterogeneous job.

.TP
\fBParentheses\fR
Parentheses can be used to group like node features together. For example
"\fB\-\-constraint=[(knl&snc4&flat)*4&haswell*1]"\fR might be used to specify
that four nodes with the features "knl", "snc4" and "flat" plus one node with
the feature "haswell" are required. All options within parenthesis should be
grouped with AND (e.g. "&") operands.
.RE

\fBWARNING\fR: When srun is executed from within salloc or sbatch,
the constraint value can only contain a single feature name. None of the
other operators are currently supported for job steps.
.br
This option applies to job and step allocations.

.TP
\fB\-\-contiguous\fR
If set, then the allocated nodes must form a contiguous set.
Not honored with the \fBtopology/tree\fR or \fBtopology/3d_torus\fR
plugins, both of which can modify the node ordering. This option applies to job
allocations.

.TP
\fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
Restrict node selection to nodes with at least the specified number of
cores per socket. See additional information under the \fB\-B\fR option
above when the task/affinity plugin is enabled.
This option applies to job allocations.

.TP
\fB\-\-cpu\-bind\fR=[{\fIquiet\fR,\fIverbose\fR},]\fItype\fR
Bind tasks to CPUs.
Used only when the task/affinity plugin is enabled.
The following informational environment variables are set when
\fB\-\-cpu\-bind\fR is in use:
.nf
	SLURM_CPU_BIND_VERBOSE
	SLURM_CPU_BIND_TYPE
	SLURM_CPU_BIND_LIST
.fi

See the \fBENVIRONMENT VARIABLES\fR section for a more detailed description
of the individual SLURM_CPU_BIND variables. These variables are available
only if the task/affinity plugin is configured.

When using \fB\-\-cpus\-per\-task\fR to run multithreaded tasks, be aware that
CPU binding is inherited from the parent of the process.  This means that
the multithreaded task should either specify or clear the CPU binding
itself to avoid having all threads of the multithreaded task use the same
mask/CPU as the parent.  Alternatively, fat masks (masks which specify more
than one allowed CPU) could be used for the tasks in order to provide
multiple CPUs for the multithreaded tasks.

By default, a job step has access to every CPU allocated to the job.
To ensure that distinct CPUs are allocated to each job step, use the
\fB\-\-exclusive\fR option.

Note that a job step can be allocated different numbers of CPUs on each node
or be allocated CPUs not starting at location zero. Therefore one of the
options which automatically generate the task binding is recommended.
Explicitly specified masks or bindings are only honored when the job step
has been allocated every available CPU on the node.

Binding a task to a NUMA locality domain means to bind the task to the set of
CPUs that belong to the NUMA locality domain or "NUMA node".
If NUMA locality domain options are used on systems with no NUMA support, then
each socket is considered a locality domain.

If the \fB\-\-cpu\-bind\fR option is not used, the default binding mode will depend
upon Slurm's configuration and the step's resource allocation.
If all allocated nodes have the same configured CpuBind mode, that will be used.
Otherwise if the job's Partition has a configured CpuBind mode, that will be used.
Otherwise if Slurm has a configured TaskPluginParam value, that mode will be used.
Otherwise automatic binding will be performed as described below.
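.IP
For example, the following sketch binds each of four tasks to a core and logs
the resulting binding before task launch (exact output varies by system):
.nf

> srun \-n4 \-\-cpu\-bind=verbose,cores a.out

.fi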

.PD
.RS
.TP
\fBAuto Binding\fR
Applies only when task/affinity is enabled. If the job step allocation includes an
allocation with a number of
sockets, cores, or threads equal to the number of tasks times cpus\-per\-task,
then the tasks will by default be bound to the appropriate resources (auto
binding). Disable this mode of operation by explicitly setting
"-\-cpu\-bind=none". Use TaskPluginParam=autobind=[threads|cores|sockets] to set
a default cpu binding in case "auto binding" doesn't find a match.
.TP
.B none
Do not bind tasks to CPUs (default unless auto binding is applied)
.TP
.B rank
Automatically bind by task rank.
The lowest numbered task on each node is bound to socket (or core or thread) zero, etc.
Not supported unless the entire node is allocated to the job.
.TP
.B map_cpu:<list>
Bind by setting CPU masks on tasks (or ranks) as specified where <list> is
<cpu_id_for_task_0>,<cpu_id_for_task_1>,...
CPU IDs are interpreted as decimal values unless they are preceded with '0x'
in which case they are interpreted as hexadecimal values.
Not supported unless the entire node is allocated to the job.
.RE

.TP
\fB\-u\fR, \fB\-\-unbuffered\fR
By default the connection between slurmstepd and the user launched application
is over a pipe. The stdio output written by the application is buffered by
the glibc until it is flushed or the output is set as unbuffered. See
setbuf(3). If this option is specified the tasks are executed with
a pseudo terminal so that the application output is unbuffered. This option
applies to step allocations.
.TP
\fB\-\-usage\fR
Display brief help message and exit.

.TP
\fB\-\-uid\fR=<\fIuser\fR>
Attempt to submit and/or run a job as \fIuser\fR instead of the
invoking user id. The invoking user's credentials will be used
to check access permissions for the target partition. User root
may use this option to run jobs as a normal user in a RootOnly
partition for example. If run as root, \fBsrun\fR will drop
its permissions to the uid specified after node allocation is
successful. \fIuser\fR may be the user name or numerical user ID. This option
applies to job and step allocations.

.TP
\fB\-\-use-min-nodes\fR
If a range of node counts is given, prefer the smaller count.

.TP
\fB\-V\fR, \fB\-\-version\fR
Display version information and exit.

.TP
\fB\-v\fR, \fB\-\-verbose\fR
Increase the verbosity of srun's informational messages.  Multiple
\fB\-v\fR's will further increase srun's verbosity.  By default only
errors will be displayed. This option applies to job and step allocations.

.TP
\fB\-W\fR, \fB\-\-wait\fR=<\fIseconds\fR>
Specify how long to wait after the first task terminates before terminating
all remaining tasks. A value of 0 indicates an unlimited wait (a warning will
be issued after 60 seconds). The default value is set by the WaitTime
parameter in the slurm configuration file (see \fBslurm.conf(5)\fR). This
option can be useful to ensure that a job is terminated in a timely fashion
in the event that one or more tasks terminate prematurely.
This option applies to job allocations.

.TP
\fB\-w\fR, \fB\-\-nodelist\fR=<\fIhost1,host2,...\fR or \fIfilename\fR>
Request a specific list of hosts.
The job will contain \fIall\fR of these hosts and possibly additional hosts
as needed to satisfy resource requirements.
The list may be specified as a comma\-separated list of hosts, a range of
hosts (host[1\-5,7,...] for example), or a filename.
The host list will be assumed to be a filename if it contains a "/" character.
If you specify a minimum node or processor count larger than can be satisfied
by the supplied host list, additional resources will be allocated on other
nodes as needed.
by the supplied host list, additional resources will be allocated on other
nodes as needed.
Rather than repeating a host name multiple times, an asterisk and
a repetition count may be appended to a host name. For example
"host1,host1" and "host1*2" are equivalent. If number of tasks is given and a
list of requested nodes is also given the number of nodes used from that list
will be reduced to match that of the number of tasks if the number of nodes in
the list is greater than the number of tasks. This option applies to job and
step allocations.
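.IP
For example, the following might run eight tasks on hypothetical hosts host1
(counted twice) and host2, adding nodes as needed for the remaining tasks:
.nf

> srun \-n8 \-w "host1*2,host2" a.out

.fi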

.TP
\fB\-\-wckey\fR=<\fIwckey\fR>
Specify wckey to be used with job.  If TrackWCKey=no (default) in the
slurm.conf this value is ignored. This option applies to job allocations.

.TP
\fB\-X\fR, \fB\-\-disable\-status\fR
Disable the display of task status when srun receives a single SIGINT
(Ctrl\-C). Instead immediately forward the SIGINT to the running job.
Without this option a second Ctrl\-C in one second is required to forcibly
terminate the job and \fBsrun\fR will immediately exit. May also be
set via the environment variable SLURM_DISABLE_STATUS. This option applies to
job allocations.

.TP
\fB\-x\fR, \fB\-\-exclude\fR=<\fIhost1,host2,...\fR or \fIfilename\fR>
Request that a specific list of hosts not be included in the resources
allocated to this job. The host list will be assumed to be a filename
if it contains a "/" character. This option applies to job allocations.
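.IP
For example, the following might keep a job off of four hypothetical nodes:
.nf

> srun \-n16 \-x "host[13\-16]" a.out

.fi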

.TP
\fB\-\-x11\fR[=<\fIall\fR|\fIfirst\fR|\fIlast\fR>]
Sets up X11 forwarding on all, first or last node(s) of the allocation. This
option is only enabled if Slurm was compiled with X11 support and
PrologFlags=x11 is defined in the slurm.conf. Default is \fIall\fR.

.TP
\fB\-Z\fR, \fB\-\-no\-allocate\fR
Run the specified tasks on a set of nodes without creating a Slurm
"job" in the Slurm queue structure, bypassing the normal resource
allocation step.  The list of nodes must be specified with the
\fB\-w\fR, \fB\-\-nodelist\fR option.  This is a privileged option
only available for the users "SlurmUser" and "root". This option applies to job
allocations.

.PP
.B srun
will submit the job request to the slurm job controller, then initiate all
processes on the remote nodes. If the request cannot be met immediately,
.B srun
will block until the resources are free to run the job. If the
\fB\-I\fR (\fB\-\-immediate\fR) option is specified, \fBsrun\fR will terminate
if resources are not immediately available.
One CPU per process is allocated by default; if \fB\-c\fR
(\fB\-\-cpus\-per\-task\fR) is specified,
more than one CPU may be allocated per process. If the number of nodes
is specified with \fB\-N\fR,
.B srun
will attempt to allocate \fIat least\fR the number of nodes specified.
.PP
Combinations of the above three options may be used to change how
processes are distributed across nodes and cpus. For instance, by specifying
both the number of processes and number of nodes on which to run, the
number of processes per node is implied. However, if the number of CPUs
per process is more important, then the number of processes (\fB\-n\fR) and
the number of CPUs per process (\fB\-c\fR) should be specified.
.PP
.B srun
will refuse to allocate more than one process per CPU unless
\fB\-\-overcommit\fR (\fB\-O\fR) is also specified.
.PP
.B srun
will attempt to meet the above specifications "at a minimum." That is,
if 16 nodes are requested for 32 processes, and some nodes do not have
2 CPUs, the allocation of nodes will be increased in order to meet the
demand for CPUs. In other words, a \fIminimum\fR of 16 nodes are being
requested. However, if 16 nodes are requested for 15 processes,
.B srun
will consider this an error, as 15 processes cannot run across 16 nodes.
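.PP
For example, on a cluster of two\-CPU nodes, the following would be satisfied
by a minimum of 16 nodes, with more allocated if some nodes cannot provide
two CPUs:
.nf

> srun \-N16 \-n32 a.out

.fi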

.PP
.B "IO Redirection"
.PP
By default, stdout and stderr will be redirected from all tasks to the
stdout and stderr of \fBsrun\fR, and stdin will be redirected from the
standard input of \fBsrun\fR to all remote tasks.
If stdin is only to be read by a subset of the spawned tasks, specifying a
file to read from rather than forwarding stdin from the \fBsrun\fR command may
be preferable as it avoids moving and storing data that will never be read.
.PP
For OS X, the poll() function does not support stdin, so input from
a terminal is not possible.
.PP
This behavior may be changed with the
\fB\-\-output\fR, \fB\-\-error\fR, and \fB\-\-input\fR
(\fB\-o\fR, \fB\-e\fR, \fB\-i\fR) options. Valid format specifications
for these options are
.TP 10
\fBall\fR
stdout and stderr are redirected from all tasks to srun.
stdin is broadcast to all remote tasks.
(This is the default behavior)
.TP
\fBnone\fR
stdout and stderr are not received from any task.
stdin is not sent to any task (stdin is closed).
.TP
\fBtaskid\fR
stdout and stderr are redirected from only the task with relative id equal
to \fItaskid\fR, where 0 <= \fItaskid\fR <= \fIntasks\fR, where \fIntasks\fR
is the total number of tasks in the current job step.
stdin is redirected from the stdout of \fBsrun\fR.
All other tasks have their stdin, stdout, and stderr connected to /dev/null.
.TP
\fBfilename pattern\fR
\fBsrun\fR allows for a filename pattern to be used to generate the
named IO file
described above. The following list of format specifiers may be
used in the format string to generate a filename that will be
unique to a given jobid, stepid, node, or task. In each case,
the appropriate number of files are opened and associated with
the corresponding tasks. Note that any format string containing
%t, %n, and/or %N will be written on the node executing the task
rather than the node where \fBsrun\fR executes. These format specifiers
are not supported on a BGQ system.
.RS 10
.TP
\fB\\\\\fR
Do not process any of the replacement symbols.
.TP
\fB%%\fR
The character "%".
.TP
\fB%A\fR
Job array's master job allocation number.
.TP
\fB%a\fR
Job array ID (index) number.
.TP
\fB%J\fR
jobid.stepid of the running job. (e.g. "128.0")
.TP
\fB%j\fR
jobid of the running job.
.TP
\fB%s\fR
stepid of the running job.
.TP
\fB%N\fR
short hostname. This will create a separate IO file per node.
.TP
\fB%n\fR
Node identifier relative to current job (e.g. "0" is the first node of
the running job) This will create a separate IO file per node.
.TP
\fB%t\fR
task identifier (rank) relative to current job. This will create a
separate IO file per task.
.TP
\fB%u\fR
User name.
.TP
\fB%x\fR
Job name.
.RE
.PP
A number placed between the percent character and format specifier may be
used to zero\-pad the result in the IO filename.
For example, for a four\-task job step with Job ID 128 and step id 0, the
format string "job%j\-%2t.out" would yield:
.nf
    job128\-00.out, job128\-01.out, ...
.fi
.PP

.SH "PERFORMANCE"
.PP
Executing \fBsrun\fR sends a remote procedure call to \fBslurmctld\fR. If
enough calls from \fBsrun\fR or other Slurm client commands that send remote
procedure calls to the \fBslurmctld\fR daemon come in at once, it can result in
a degradation of performance of the \fBslurmctld\fR daemon, possibly resulting
in a denial of service.
.PP
Do not run \fBsrun\fR or other Slurm client commands that send remote procedure
calls to \fBslurmctld\fR from loops in shell scripts or other programs. Ensure
that programs limit calls to \fBsrun\fR to the minimum necessary for the
information you are trying to gather.

.SH "INPUT ENVIRONMENT VARIABLES"
.PP
Some srun options may be set via environment variables.
These environment variables, along with their corresponding options,
are listed below.
Note: Command line options will always override these settings.
.TP 22
\fBPMI_FANOUT\fR
This is used exclusively with PMI (MPICH2 and MVAPICH2) and
controls the fanout of data communications. The srun command
sends messages to application programs (via the PMI library)
and those applications may be called upon to forward that
data to up to this number of additional tasks. Higher values
offload work from the srun command to the applications and
likely increase the vulnerability to failures.
The default value is 32.
.TP
\fBPMI_FANOUT_OFF_HOST\fR
This is used exclusively with PMI (MPICH2 and MVAPICH2) and
controls the fanout of data communications.  The srun command
sends messages to application programs (via the PMI library)
and those applications may be called upon to forward that
data to additional tasks. By default, srun sends one message
per host and one task on that host forwards the data to other
tasks on that host up to \fBPMI_FANOUT\fR.
If \fBPMI_FANOUT_OFF_HOST\fR is defined, the user task
may be required to forward the data to tasks on other hosts.
Setting \fBPMI_FANOUT_OFF_HOST\fR may increase performance.
Since more work is performed by the PMI library loaded by
the user application, failures also can be more common and
more difficult to diagnose.
.TP
\fBPMI_TIME\fR
This is used exclusively with PMI (MPICH2 and MVAPICH2) and
controls how much the communications from the tasks to the
srun are spread out in time in order to avoid overwhelming the
srun command with work. The default value is 500 (microseconds)
per task.
.TP
\fBSLURM_ACCTG_FREQ\fR
Same as \fB\-\-acctg\-freq\fR
.TP
\fBSLURM_BCAST\fR
Same as \fB\-\-bcast\fR
.TP
\fBSLURM_BURST_BUFFER\fR
Same as \fB\-\-bb\fR
.TP
\fBSLURM_COMPRESS\fR
Same as \fB\-\-compress\fR
.TP
\fBSLURM_CONSTRAINT\fR
Same as \fB\-C\fR, \fB\-\-constraint\fR
.TP
\fBSLURM_CORE_SPEC\fR
Same as \fB\-\-core\-spec\fR
.TP
\fBSLURM_CPU_BIND\fR
Same as \fB\-\-cpu\-bind\fR
.TP
\fBSLURM_CPU_FREQ_REQ\fR
Same as \fB\-\-cpu\-freq\fR.
.TP
\fBSLURM_CPUS_PER_GPU\fR
Same as \fB\-\-cpus\-per\-gpu\fR
.TP
\fBSLURM_CPUS_PER_TASK\fR
Same as \fB\-c, \-\-cpus\-per\-task\fR
.TP
\fBSLURM_DEBUG\fR
Same as \fB\-v, \-\-verbose\fR
.TP
\fBSLURM_DELAY_BOOT\fR
Same as \fB\-\-delay\-boot\fR
.TP
\fBSLURMD_DEBUG\fR
Same as \fB\-d, \-\-slurmd\-debug\fR
.TP
\fBSLURM_DEPENDENCY\fR
Same as \fB\-P, \-\-dependency\fR=<\fIjobid\fR>
.TP
\fBSLURM_DISABLE_STATUS\fR
Same as \fB\-X, \-\-disable\-status\fR
.TP
\fBSLURM_DIST_PLANESIZE\fR
Same as \fB\-m plane\fR
.TP
\fBSLURM_DISTRIBUTION\fR
Same as \fB\-m, \-\-distribution\fR
.TP
\fBSLURM_EPILOG\fR
Same as \fB\-\-epilog\fR
.TP
\fBSLURM_EXIT_ERROR\fR
Specifies the exit code generated when a Slurm error occurs
(e.g. invalid options).
This can be used by a script to distinguish application exit codes from
various Slurm error conditions.
Also see \fBSLURM_EXIT_IMMEDIATE\fR.
.TP
\fBSLURM_EXIT_IMMEDIATE\fR
Specifies the exit code generated when the \fB\-\-immediate\fR option
is used and resources are not currently available.
This can be used by a script to distinguish application exit codes from
various Slurm error conditions.
Also see \fBSLURM_EXIT_ERROR\fR.
.TP
\fBSLURM_EXPORT_ENV\fR
Same as \fB\-\-export\fR
.TP
\fBSLURM_GPUS\fR
Same as \fB\-G, \-\-gpus\fR
.TP
\fBSLURM_GPU_BIND\fR
Same as \fB\-\-gpu\-bind\fR
.TP
\fBSLURM_GPU_FREQ\fR
Same as \fB\-\-gpu\-freq\fR
.TP
\fBSLURM_GPUS_PER_NODE\fR
Same as \fB\-\-gpus\-per\-node\fR
.TP
\fBSLURM_GPUS_PER_TASK\fR
Same as \fB\-\-gpus\-per\-task\fR
.TP
\fBSLURM_GRES_FLAGS\fR
Same as \fB\-\-gres\-flags\fR
.TP
\fBSLURM_HINT\fR
Same as \fB\-\-hint\fR
.TP
\fBSLURM_GRES\fR
Same as \fB\-\-gres\fR. Also see \fBSLURM_STEP_GRES\fR
.TP
\fBSLURM_IMMEDIATE\fR
Same as \fB\-I, \-\-immediate\fR
.TP
\fBSLURM_JOB_ID\fR
Same as \fB\-\-jobid\fR
.TP
\fBSLURM_JOB_NAME\fR
Same as \fB\-J, \-\-job\-name\fR except within an existing
allocation, in which case it is ignored to avoid using the batch job's name
as the name of each job step.
.TP
\fBSLURM_JOB_NODELIST\fR
Same as \fB\-w\fR, \fB\-\-nodelist\fR=<\fIhost1,host2,...\fR or
\fIfilename\fR>. If the job has been resized, ensure that this nodelist is
adjusted (or undefined) to avoid job steps being rejected due to down nodes.
.TP
\fBSLURM_JOB_NUM_NODES\fR (and \fBSLURM_NNODES\fR for backwards compatibility)
Same as \fB\-N, \-\-nodes\fR
.TP
\fBSLURM_MEM_PER_NODE\fR
Same as \fB\-\-mem\fR
.TP
\fBSLURM_MPI_TYPE\fR
Same as \fB\-\-mpi\fR
.TP
\fBSLURM_NETWORK\fR
Same as \fB\-\-network\fR
.TP
\fBSLURM_NO_KILL\fR
Same as \fB\-k\fR, \fB\-\-no\-kill\fR
.TP
\fBSLURM_NTASKS\fR (and \fBSLURM_NPROCS\fR for backwards compatibility)
Same as \fB\-n, \-\-ntasks\fR
.TP
\fBSLURM_NTASKS_PER_CORE\fR
Same as \fB\-\-ntasks\-per\-core\fR
.TP
\fBSLURM_NTASKS_PER_NODE\fR
Same as \fB\-\-ntasks\-per\-node\fR
.TP
\fBSLURM_NTASKS_PER_SOCKET\fR
Same as \fB\-\-ntasks\-per\-socket\fR
.TP
\fBSLURM_OPEN_MODE\fR
Same as \fB\-\-open\-mode\fR
.TP
\fBSLURM_OVERCOMMIT\fR
Same as \fB\-O, \-\-overcommit\fR
.TP
\fBSLURM_PARTITION\fR
Same as \fB\-p, \-\-partition\fR
.TP
\fBSLURM_PMI_KVS_NO_DUP_KEYS\fR
If set, then PMI key\-pairs will contain no duplicate keys. MPI can use
this variable to inform the PMI library that it will not use duplicate
keys so PMI can skip the check for duplicate keys.
This is the case for MPICH2 and reduces overhead in testing for duplicates
for improved performance.
.TP
\fBSLURM_POWER\fR
Same as \fB\-\-power\fR
.TP
\fBSLURM_PROFILE\fR
Same as \fB\-\-profile\fR
.TP
\fBSLURM_PROLOG\fR
Same as \fB\-\-prolog\fR
.TP
\fBSLURM_QOS\fR
Same as \fB\-\-qos\fR
.TP
\fBSLURM_SIGNAL\fR
Same as \fB\-\-signal\fR
.TP
\fBSLURM_STDERRMODE\fR
Same as \fB\-e, \-\-error\fR
.TP
\fBSLURM_STDINMODE\fR
Same as \fB\-i, \-\-input\fR
.TP
\fBSLURM_SPREAD_JOB\fR
Same as \fB\-\-spread\-job\fR
.TP
\fBSLURM_SRUN_REDUCE_TASK_EXIT_MSG\fR
if set and non-zero, successive task exit messages with the same exit code will
be printed only once.
.TP
\fBSLURM_STEP_GRES\fR
Same as \fB\-\-gres\fR (only applies to job steps, not to job allocations).
Also see \fBSLURM_GRES\fR
.TP
\fBSLURM_STEP_KILLED_MSG_NODE_ID\fR=ID
If set, only the specified node will log when the job or step is killed
by a signal.
.TP
\fBSLURM_STDOUTMODE\fR
Same as \fB\-o, \-\-output\fR
.TP
\fBSLURM_TASK_EPILOG\fR
Same as \fB\-\-task\-epilog\fR
.TP
\fBSLURM_TASK_PROLOG\fR
Same as \fB\-\-task\-prolog\fR
.TP
\fBSLURM_TEST_EXEC\fR
If defined, srun will verify existence of the executable program along with user
execute permission on the node where srun was called before attempting to
launch it on nodes in the step.
.TP
\fBSLURM_THREAD_SPEC\fR
Same as \fB\-\-thread\-spec\fR
.TP
\fBSLURM_THREADS\fR
Same as \fB\-T, \-\-threads\fR
.TP
\fBSLURM_TIMELIMIT\fR
Same as \fB\-t, \-\-time\fR
.TP
\fBSLURM_UNBUFFEREDIO\fR
Same as \fB\-u, \-\-unbuffered\fR
.TP
\fBSLURM_USE_MIN_NODES\fR
Same as \fB\-\-use\-min\-nodes\fR
.TP
\fBSRUN_EXPORT_ENV\fR
Same as \fB\-\-export\fR, and will override any setting for \fBSLURM_EXPORT_ENV\fR.


.SH "OUTPUT ENVIRONMENT VARIABLES"
.PP
srun will set some environment variables in the environment
of the executing tasks on the remote compute nodes.
These environment variables are:

.TP 22
\fBSLURM_*_HET_GROUP_#\fR
For a heterogeneous job allocation, the environment variables are set separately
for each component.
.TP
\fBSLURM_CLUSTER_NAME\fR
Name of the cluster on which the job is executing.
.TP
\fBSLURM_CPU_BIND_VERBOSE\fR
\-\-cpu\-bind verbosity (quiet,verbose).
.TP
\fBSLURM_CPU_BIND_TYPE\fR
\-\-cpu\-bind type (none,rank,map_cpu:,mask_cpu:).
.TP
\fBSLURM_CPU_BIND_LIST\fR
\-\-cpu\-bind map or mask list (list of Slurm CPU IDs or masks for this node,
CPU_ID = Board_ID x threads_per_board +
Socket_ID x threads_per_socket +
Core_ID x threads_per_core + Thread_ID).

.TP
\fBSLURM_CPU_FREQ_REQ\fR
Contains the value requested for cpu frequency on the srun command as
a numerical frequency in kilohertz, or a coded value for a request of
\fIlow\fR, \fImedium\fR, \fIhighm1\fR or \fIhigh\fR for the frequency.
See the description of the \fB\-\-cpu\-freq\fR option or the
\fBSLURM_CPU_FREQ_REQ\fR input environment variable.
.TP
\fBSLURM_CPUS_ON_NODE\fR
Count of processors available to the job on this node.
Note the select/linear plugin allocates entire nodes to
jobs, so the value indicates the total count of CPUs on the node.
For the select/cons_res plugin, this number indicates the number of cores
on this node allocated to the job.
.TP
\fBSLURM_CPUS_PER_GPU\fR
Number of CPUs requested per allocated GPU.
Only set if the \fB\-\-cpus\-per\-gpu\fR option is specified.
.TP
\fBSLURM_CPUS_PER_TASK\fR
Number of cpus requested per task.
Only set if the \fB\-\-cpus\-per\-task\fR option is specified.
.TP
\fBSLURM_GPU_FREQ\fR
Requested GPU frequency.
Only set if the \fB\-\-gpu\-freq\fR option is specified.
.TP
\fBSLURM_GPUS_PER_NODE\fR
Requested GPU count per allocated node.
Only set if the \fB\-\-gpus\-per\-node\fR option is specified.
.TP
\fBSLURM_GPUS_PER_SOCKET\fR
Requested GPU count per allocated socket.
Only set if the \fB\-\-gpus\-per\-socket\fR option is specified.
.TP
\fBSLURM_GPUS_PER_TASK\fR
Requested GPU count per allocated task.
Only set if the \fB\-\-gpus\-per\-task\fR option is specified.
.TP
\fBSLURM_GTIDS\fR
Global task IDs running on this node.
Zero origin and comma separated.
.TP
\fBSLURM_JOB_ACCOUNT\fR
Account name associated with the job allocation.
.TP
\fBSLURM_JOB_CPUS_PER_NODE\fR
Number of CPUs per node.
.TP
\fBSLURM_JOB_DEPENDENCY\fR
Set to value of the \-\-dependency option.
.TP
\fBSLURM_JOB_ID\fR (and \fBSLURM_JOBID\fR for backwards compatibility)
Job id of the executing job.

.TP
\fBSLURM_JOB_NAME\fR
Set to the value of the \-\-job\-name option or the command name when srun
is used to create a new job allocation. Not set when srun is used only to
create a job step (i.e. within an existing job allocation).

.TP
\fBSLURM_JOB_PARTITION\fR
Name of the partition in which the job is running.

.TP
\fBSLURM_JOB_QOS\fR
Quality Of Service (QOS) of the job allocation.
.TP
\fBSLURM_JOB_RESERVATION\fR
Advanced reservation containing the job allocation, if any.

.TP
\fBSLURM_LAUNCH_NODE_IPADDR\fR
IP address of the node from which the task launch was
initiated (where the srun command ran from).
.TP
\fBSLURM_MEM_BIND_TYPE\fR
\-\-mem\-bind type (none,rank,map_mem:,mask_mem:).
.TP
\fBSLURM_MEM_BIND_VERBOSE\fR
\-\-mem\-bind verbosity (quiet,verbose).
.TP
\fBSLURM_MEM_PER_GPU\fR
Requested memory per allocated GPU.
Only set if the \fB\-\-mem\-per\-gpu\fR option is specified.
.TP
\fBSLURM_JOB_NUM_NODES\fR (and \fBSLURM_NNODES\fR for backwards compatibility)
Total number of nodes in the job's resource allocation.
.TP
\fBSLURM_NODE_ALIASES\fR
Sets of node name, communication address and hostname for nodes allocated to
the job from the cloud. Each element in the set is colon separated and each
set is comma separated. For example:
.na
SLURM_NODE_ALIASES\:=\:ec0:1.2.3.4:foo,ec1:1.2.3.5:bar
.ad
.TP
\fBSLURM_NODEID\fR
The relative node ID of the current node.
.TP
\fBSLURM_JOB_NODELIST\fR
List of nodes allocated to the job.
.TP
\fBSLURM_NTASKS\fR (and \fBSLURM_NPROCS\fR for backwards compatibility)
Total number of processes in the current job or job step.
.TP
\fBSLURM_HET_SIZE\fR
Set to count of components in heterogeneous job.
.TP
\fBSLURM_PRIO_PROCESS\fR
The scheduling priority (nice value) at the time of job submission.
This value is propagated to the spawned processes.
.TP
\fBSLURM_PROCID\fR
The MPI rank (or relative process ID) of the current process.
.TP
\fBSLURM_SRUN_COMM_HOST\fR
IP address of srun communication host.
.TP
\fBSLURM_SRUN_COMM_PORT\fR
srun communication port.
.TP
\fBSLURM_STEP_LAUNCHER_PORT\fR
Step launcher port.
.TP
\fBSLURM_STEP_NODELIST\fR
List of nodes allocated to the step.
.TP
\fBSLURM_SUBMIT_DIR\fR
The directory from which \fBsrun\fR was invoked or, if applicable, the
directory specified by the \fB\-D, \-\-chdir\fR option.
.TP
\fBSLURM_SUBMIT_HOST\fR
The hostname of the computer from which \fBsrun\fR was invoked.
.TP
\fBSLURM_TASK_PID\fR
The process ID of the task being started.
.TP
\fBSLURM_TASKS_PER_NODE\fR
Number of tasks to be initiated on each node. Values are
comma separated and in the same order as SLURM_JOB_NODELIST.
If two or more consecutive nodes are to have the same task
count, that count is followed by "(x#)" where "#" is the
repetition count. For example, "SLURM_TASKS_PER_NODE=2(x3),1"
indicates that the first three nodes will each execute two
tasks and the fourth node will execute one task.
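.IP
For example, in a hypothetical step created with "srun \-N4 \-n7" and the
default block distribution, each task would see the value encoded as
described above:
.nf

SLURM_TASKS_PER_NODE=2(x3),1

.fi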

.TP
\fBSLURM_TOPOLOGY_ADDR\fR
This is set only if the system has the topology/tree plugin configured.
The value will be set to the names of the network switches which may be
involved in the job's communications, from the system's top level switch
down to the leaf switch and ending with the node name. A period is used to
separate each hardware component name.
.TP
\fBSLURM_TOPOLOGY_ADDR_PATTERN\fR
This is set only if the system has the topology/tree plugin configured.
The value will be set to the component types listed in \fBSLURM_TOPOLOGY_ADDR\fR.
Each component will be identified as either "switch" or "node".
A period is used to separate each hardware component type.
.TP
\fBSLURM_UMASK\fR
The \fIumask\fR in effect when the job was submitted.
.TP
\fBSLURMD_NODENAME\fR
Name of the node running the task. In the case of a parallel job executing on
multiple compute nodes, the various tasks will have this environment variable
set to different values on each compute node.
.TP
\fBSRUN_DEBUG\fR
Set to the logging level of the \fBsrun\fR command.
Default value is 3 (info level).
The value is incremented or decremented based upon the \-\-verbose and
\-\-quiet options.

.SH "SIGNALS AND ESCAPE SEQUENCES"
Signals sent to the \fBsrun\fR command are automatically forwarded to
the tasks it is controlling with a few exceptions. The escape sequence
\fB<control\-c>\fR will report the state of all tasks associated with
the \fBsrun\fR command. If \fB<control\-c>\fR is entered twice within
one second, then the associated SIGINT signal will be sent to all tasks
and a termination sequence will be entered sending SIGCONT, SIGTERM, and
SIGKILL in sequence to all spawned tasks.
If a third \fB<control\-c>\fR is received, the \fBsrun\fR program will be
terminated without waiting for remote tasks to exit or their I/O to complete.
.PP
The escape sequence \fB<control\-z>\fR is presently ignored.

.SH "MPI SUPPORT"
MPI use depends upon the type of MPI being used.
There are three fundamentally different modes of operation used by these
various MPI implementations.
.LP
1. Slurm directly launches the tasks and performs initialization
of communications through the PMI2 or PMIx APIs.
For example: "srun \-n16 a.out".

2. Slurm creates a resource allocation for the job and then
mpirun launches tasks using Slurm's infrastructure (OpenMPI).

3. Slurm creates a resource allocation for the job and then
mpirun launches tasks using some mechanism other than Slurm,
such as SSH or RSH.
These tasks are initiated outside of Slurm's monitoring
or control. Slurm's epilog should be configured to purge
these tasks when the job's allocation is relinquished,
or the use of pam_slurm_adopt is highly recommended.

See \fIhttps://slurm.schedmd.com/mpi_guide.html\fR
for more information on use of these various MPI implementations
with Slurm.
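.LP
For example, a PMIx\-enabled MPI application might be launched directly by
Slurm as follows (a sketch; assumes the pmix plugin is configured on the
system):
.nf

> srun \-\-mpi=pmix \-n16 a.out

.fi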

.SH "MULTIPLE PROGRAM CONFIGURATION"
Comments in the configuration file must have a "#" in column one.
The configuration file contains the following fields separated by white
space:
.TP
Task rank
One or more task ranks to use this configuration.
Multiple values may be comma separated.
Ranges may be indicated with two numbers separated with a '\-' with
the smaller number first (e.g. "0\-4" and not "4\-0").
To indicate all tasks not otherwise specified, specify a rank of '*' as the
last line of the file.
If an attempt is made to initiate a task for which no executable
program is defined, the following error message will be produced
"No executable program specified for this task".
.TP
Executable
The name of the program to execute.
May be fully qualified pathname if desired.
.TP
Arguments
Program arguments.
The expression "%t" will be replaced with the task's number.
The expression "%o" will be replaced with the task's offset within
this range (e.g. a configured task rank value of "1\-5" would
have offset values of "0\-4").
Single quotes may be used to avoid having the enclosed values interpreted.
This field is optional.
Any arguments for the program entered on the command line will be added
to the arguments specified in the configuration file.
.PP
For example:
.nf
###################################################################
# srun multiple program configuration file
#
# srun \-n8 \-l \-\-multi\-prog silly.conf
###################################################################
4\-6       hostname
1,7       echo  task:%t
0,2\-3     echo  offset:%o

> srun \-n8 \-l \-\-multi\-prog silly.conf
0: offset:0
1: task:1
2: offset:1
3: offset:2
4: linux15.llnl.gov
5: linux16.llnl.gov
6: linux17.llnl.gov
7: task:7
.fi


.SH "EXAMPLES"
This simple example demonstrates the execution of the command \fBhostname\fR
in eight tasks. At least eight processors will be allocated to the job
(the same as the task count) on however many nodes are required to satisfy
the request. The output of each task will be preceded by its task number.
(The machine "dev" in the example below has a total of two CPUs per node)

.nf

> srun \-n8 \-l hostname
0: dev0
1: dev0
2: dev1
3: dev1
4: dev2
5: dev2
6: dev3
7: dev3

.fi
.PP
The srun \fB\-r\fR option is used within a job script
to run two job steps on disjoint nodes in the following
example. The script is run using allocate mode instead
of as a batch job in this case.

.nf

> cat test.sh
#!/bin/sh
echo $SLURM_JOB_NODELIST
srun \-lN2 \-r2 hostname
srun \-lN2 hostname

> salloc \-N4 test.sh
dev[7\-10]
0: dev9
1: dev10
0: dev7
1: dev8

.fi
.PP
The following script runs two job steps in parallel
within an allocated set of nodes.

.nf

> cat test.sh
#!/bin/bash
srun \-lN2 \-n4 \-r 2 sleep 60 &
srun \-lN2 \-r 0 sleep 60 &
sleep 1
squeue
squeue \-s
wait

> salloc \-N4 test.sh
  JOBID PARTITION     NAME     USER  ST      TIME  NODES NODELIST
  65641     batch  test.sh   grondo   R      0:01      4 dev[7\-10]

STEPID     PARTITION     USER      TIME NODELIST
65641.0        batch   grondo      0:01 dev[7\-8]
65641.1        batch   grondo      0:01 dev[9\-10]

.fi
.PP
This example demonstrates how one executes a simple MPI job.
We use \fBsrun\fR to build a list of machines (nodes) to be used by
\fBmpirun\fR in its required format. A sample command line and
the script to be executed follow.

.nf

> cat test.sh
#!/bin/sh
MACHINEFILE="nodes.$SLURM_JOB_ID"

# Generate Machinefile for mpi such that hosts are in the same
#  order as if run via srun
#
srun \-l /bin/hostname | sort \-n | awk '{print $2}' > $MACHINEFILE

# Run using generated Machine file:
mpirun \-np $SLURM_NTASKS \-machinefile $MACHINEFILE mpi\-app

rm $MACHINEFILE

> salloc \-N2 \-n4 test.sh

.fi
.PP
This simple example demonstrates the execution of different jobs on different
nodes in the same srun. You can do this for any number of nodes or any
number of jobs. The executable to run on each node is selected by the
SLURM_NODEID environment variable, whose value starts at 0 and goes up to
the number of nodes specified on the srun command line.

.nf

> cat test.sh
case $SLURM_NODEID in
    0) echo "I am running on "
       hostname ;;
    1) hostname
       echo "is where I am running" ;;
esac

> srun \-N2 test.sh
I am running on
dev0
dev1
is where I am running
.fi
.PP
This example shows a script in which Slurm is used to provide resource
management for a job by executing the various job steps as processors
become available for their dedicated use.

.nf

> cat my.script
#!/bin/bash
srun \-\-exclusive \-n4 prog1 &
srun \-\-exclusive \-n3 prog2 &
srun \-\-exclusive \-n1 prog3 &
srun \-\-exclusive \-n1 prog4 &
wait
.fi

.PP
This example shows how to launch an application called "master" with one task,
8 CPUs and 16 GB of memory (2 GB per CPU) plus another application called
"slave" with 16 tasks, 1 CPU per task (the default) and 1 GB of memory per task.

.nf

> srun \-n1 \-c8 \-\-mem\-per\-cpu=2gb master : \-n16 \-\-mem\-per\-cpu=1gb slave
.fi

.SH "COPYING"
Copyright (C) 2006\-2007 The Regents of the University of California.
Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
.br
Copyright (C) 2008\-2010 Lawrence Livermore National Security.
.br
Copyright (C) 2010\-2015 SchedMD LLC.
.LP
This file is part of Slurm, a resource management program.
For details, see <https://slurm.schedmd.com/>.
.LP
Slurm is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
.LP
Slurm is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
details.

.SH "SEE ALSO"
\fBsalloc\fR(1), \fBsattach\fR(1), \fBsbatch\fR(1), \fBsbcast\fR(1),
\fBscancel\fR(1), \fBscontrol\fR(1), \fBsqueue\fR(1), \fBslurm.conf\fR(5),
\fBsched_setaffinity\fR(2), \fBnuma\fR(3)
