Unity Partitions

Overview

Partitions (previously called exclusive nodes) are how you select which part of the cluster your jobs run on.

This is a major change to how jobs are submitted under the new scheduler (SLURM) if you have EXCLUSIVE nodes.

If a unit or research group has purchased exclusive nodes, they need to follow this procedure to access those nodes; otherwise their jobs will run only on the shared infrastructure and will not take advantage of the exclusive nodes. Skipping this step may mean your jobs wait longer in the queue.
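To see which partitions exist and the state of their nodes, you can run sinfo on a login node (a quick check, assuming the standard SLURM client tools are on your path):

    # List every partition and its nodes
    sinfo

    # Show only one partition, for example the default batch partition
    sinfo -p batch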

Applicability

This affects PRIVATE/EXCLUSIVE nodes (such as department or research group nodes), and therefore the following units and research groups:

  • Astronomy
  • Economics
  • Linguistics – de Marneffe
  • Linguistics – Schuler
  • Linguistics – White
  • Math – Xiu
  • Microbiology
  • Microbiology – Bradley
  • Microbiology – Sullivan
  • Microbiology – Wrighton
  • Molecular Genetics – Ou
  • Physics – Trivedi
  • Physics – Randeria
  • School of Earth Sciences
  • School of Earth Sciences – Gomez
  • School of Earth Sciences – Howat
  • School of Earth Sciences – Moortgat
  • Sociology
  • Statistics
  • Statistics – MacEachern-Peruggia

Partition names:

Public/shared:

  • batch – All jobs go here by default. Any shared nodes are in this group. If you do not have any EXCLUSIVE machines in your department, research group, or collaboration, you run in the batch partition.

Exclusive:

  • ast: Astronomy, 2 nodes
  • bradley: 1 node
  • batch8: RHEL8 testbed, OPEN to ALL; will be retired on 5/6/2024.
  • econ: 4 nodes
  • econ-macro: 2 nodes
  • demarneffe: 1 GPU node shared by the Linguistics de Marneffe group
  • gomez: 4 nodes exclusive to the SES Gomez (ses-gomez) group
  • howat: any node in the SES Howat group
  • howat-cascade: any of the Howat group's 10 cascade nodes (newer)
  • howat-c18: 4 Skylake nodes, the same hardware as OSC's Cluster 18 (first Pitzer nodes)
  • howat-ice: 4 Howat group nodes that were formerly ice nodes
  • jouline: 1 node for the Microbiology (MIB) Jouline group
  • maceachern-peruggia: 2 nodes reserved under an NSF grant for the MacEachern-Peruggia group
  • mib: 2 large-memory nodes (1 TB and 1.5 TB) reserved for Microbiology users
  • moortgat: 2 nodes exclusive to the Moortgat group, including a GPU node (quad A40)
  • ou: 1 node exclusive to the Ou group
  • schuler: Linguistics Schuler group nodes
  • schuler-mit: 1 node reserved for the Schuler MIT collaboration
  • soc: 2 nodes reserved for the Sociology department
  • stat: any Department of Statistics user/staff/faculty, on any stat node
  • stat-cascade: any Department of Statistics user/staff/faculty, on any stat cascade node
  • stat-skylake: any Department of Statistics user/staff/faculty, on any stat skylake node
  • stat-sapphire: any Department of Statistics user/staff/faculty, on any stat sapphire node (these systems run RHEL8 and will join the stat partition when the cluster is converted)
  • ses: 1 node for School of Earth Sciences
  • sullivan: 1 node for Sullivan group
  • trivedi: any node in the Physics Trivedi research group
  • trivedi-broadwell: any Broadwell node in the Physics Trivedi research group
  • trivedi-cascade: any Cascade node in the Physics Trivedi research group (newer)
  • trivedi-sapphire: any Sapphire node in the Physics Trivedi research group (newest)
  • white-1gpu: 1 node with 1 GPU for the Linguistics White group
  • white-2gpu: 3 nodes with 2 GPUs each for the Linguistics White group
  • wrighton: 1 node reserved for Wrighton group
  • xiu: 1 node exclusive to Xiu group
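If you want to confirm the node list, limits, and allowed accounts or groups for any partition above, you can query the scheduler directly. This is a general SLURM sketch (the stat partition is used here only as an example; substitute your own partition name):

    # Show one partition's configuration, including AllowAccounts/AllowGroups
    scontrol show partition stat

    # List the nodes in that partition and their current state
    sinfo -p stat -N -l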

Usage

What does this mean for you?

  1. Add -p partition_name to your job requests for sbatch, salloc, or sinteractive. For the qsub compatibility command, the flag is -q (so add -q partition_name). Without a -p flag to indicate a partition, you will be scheduled in the default batch partition. You may specify multiple partitions separated by commas ("-p partition_name1,partition_name2"); your job will run on the first partition that becomes available. See the example commands after this list.
  2. Run jobs on your EXCLUSIVE nodes first, then submit to the batch partition for overflow.
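For example, the commands below submit a job script first to an exclusive partition and then with batch as a fallback. The script name myjob.sh and the stat partition are placeholders; substitute your own script and the partition names from the list above:

    # Submit to your exclusive partition
    sbatch -p stat myjob.sh

    # List two partitions; the job runs on whichever becomes available first
    sbatch -p stat,batch myjob.sh

    # The same setting can live inside the job script as a directive
    #SBATCH --partition=stat,batch

    # Interactive and allocation commands take the same flag,
    # while the qsub compatibility command uses -q
    salloc -p stat
    qsub -q stat myjob.sh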

If you cannot get onto a partition that you believe you should be able to compute on, please email asctech@osu.edu.

Example error: “Job submit/allocate failed: Invalid account or account/partition combination specified”
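Before emailing, one way to check which accounts and partitions your user is associated with in the scheduler's accounting database (a sketch that assumes regular users can run sacctmgr queries on Unity) is:

    # List your SLURM account/partition associations
    sacctmgr show associations user=$USER format=Account,Partition,QOS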

