Unity Partitions

Overview: Navigating Partitions on SLURM

This guide explains how to use partitions, which are groups of compute nodes, on Unity's SLURM scheduler. This is especially important for users with exclusive nodes: if your department or research group has purchased exclusive nodes, you must specify the correct partition to use them. Otherwise, your jobs will be submitted to the shared batch partition, which may result in longer wait times.

This information is relevant to the following units and research groups that have exclusive nodes:

  • Astronomy

  • Economics

  • Linguistics: Schuler, White

  • Mathematics: Ogle, Xiu

  • Microbiology: Bradley, Jouline, Sullivan

  • Molecular Genetics: Ou

  • Physics: Trivedi, Randaria

  • School of Earth Sciences: Gomez, Howat, Moortgat

  • Sociology

  • Statistics: MacEachern-Peruggia

Partition Naming and Structure

Public/Shared Partitions


  • batch: This is the default partition for all jobs. All shared nodes belong to this partition. If your group does not have exclusive nodes, all of your jobs will run on the batch partition.
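
To see which partitions are visible to you and their current state, the standard SLURM sinfo command can be used (exact columns and output depend on the site's configuration):

    sinfo                     # list all partitions visible to you
    sinfo -p batch            # show only the batch partition
    sinfo -o "%P %a %D %l"    # partition, availability, node count, time limit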


Exclusive Partitions

These partitions are reserved for specific departments or research groups. The names of these partitions are listed below, along with the number of nodes and any specific details.


Department Owned

Nodes are reserved for users in these departments:

  • ast: Astronomy (10 nodes)

  • econ: Economics (3 nodes; includes the econ-metrics and econ-macro partitions)

    • econ-metrics: Economics partition for Econometrics-specific jobs (1 node)

    • econ-macro: Economics partition exclusive to Macro Econ users (2 nodes)

  • math: Mathematics (1 node)

  • mib: Microbiology (2 large memory nodes, 1 TB and 1.5 TB)

  • ses: School of Earth Sciences (1 node)

  • soc: Sociology (1 node)

  • stat: Statistics (11 nodes). This department has several partitions for different node types:

    • stat-cascade (6 nodes)

    • stat-sapphire (1 node, newest)

    • stat-skylake (3 nodes, oldest)


Research Group Owned

Nodes are reserved for users working with these research groups:

  • bradley: Microbiology-Bradley (1 node)

  • gomez: School of Earth Sciences-Gomez (4 nodes)

  • howat: School of Earth Sciences-Howat. This group has multiple partitions:

    • howat-cascade: 10 Cascade (newer) nodes

    • howat-c18: 4 Skylake nodes

    • howat-ice: 4 Icelake (oldest) nodes

    • howat-sapphire: 1 Sapphire (newest) node

  • jouline: Microbiology-Jouline (1 node)

  • maceachern-peruggia: Statistics (2 nodes for an NSF grant)

  • moortgat: School of Earth Sciences-Moortgat (4 nodes, including GPU nodes)

    • moortgat-gpu: School of Earth Sciences-Moortgat (3 GPU nodes)

    • moortgat-l40-gpu: School of Earth Sciences-Moortgat (1 GPU node with L40 cards)

    • moortgat-l40s-gpu: School of Earth Sciences-Moortgat (1 GPU node with L40s cards)

  • ogle: Mathematics-Ogle (1 node)

  • ou: Molecular Genetics-Ou (1 node)

  • randaria: Physics-Randaria (4 GPU nodes)

  • schuler: Linguistics-Schuler. This group has an additional partition:

    • schuler-mit: 1 node for a collaboration with MIT

  • sullivan: Microbiology-Sullivan (1 node)

  • trivedi: Physics-Trivedi. This group has multiple partitions based on node age (14 nodes):

    • trivedi-broadwell (4 nodes, oldest)

    • trivedi-cascade (4 nodes, older)

    • trivedi-sapphire (4 nodes, newer)

    • trivedi-granite (2 nodes, newest)

  • white-1gpu: Linguistics-White (1 node with 1 GPU)

  • white-2gpu: Linguistics-White (3 nodes with 2 GPUs each)

  • xiu: Mathematics-Xiu (1 node)

How to Use Partitions

To ensure your jobs run on your exclusive nodes, you must specify the correct partition when submitting a job.


Submitting a Job

  • Use the -p flag with sbatch, salloc, and sinteractive.

  • Example: sbatch -p partition_name my_job.sh (a sample batch script follows this list)

  • If you don't specify a partition, your job will automatically be sent to the default batch partition.

  • To submit a job to multiple partitions, separate the names with a comma: sbatch -p partition_name1,partition_name2 my_job.sh. Your job will run on the first available node in either partition.
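
The partition can also be set inside the job script itself with an #SBATCH directive, so it does not need to be repeated on the command line. A minimal sketch, where partition_name, the job name, and the resource values are placeholders to adapt to your job:

    #!/bin/bash
    #SBATCH --partition=partition_name   # your group's exclusive partition
    #SBATCH --job-name=example           # placeholder job name
    #SBATCH --time=01:00:00              # requested walltime
    #SBATCH --nodes=1
    #SBATCH --ntasks=1

    # your actual workload goes here
    ./my_program

Submit it with sbatch my_job.sh; a -p flag given on the command line overrides the partition set in the script.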


Recommended Workflow 

For optimal job scheduling, submit jobs to your exclusive partition first and use the batch partition as overflow. This ensures you take full advantage of your group's dedicated resources. DO NOT USE RESOURCES YOU DO NOT HAVE PERMISSION TO USE.
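
In practice, this can be done in a single submission by listing your exclusive partition together with batch; the scheduler starts the job wherever resources become available first (partition_name is a placeholder for your group's partition):

    sbatch -p partition_name,batch my_job.sh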


Troubleshooting

If you encounter an error like "Job submit/allocate failed: Invalid account or account/partition combination specified," it means you don't have permission to use that partition. If you believe this is an error, contact asctech@osu.edu.
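
If you are unsure what you have access to, two standard SLURM commands can help, assuming the cluster's accounting is configured in the usual way:

    # show the account (and any partition) associations for your user
    sacctmgr show associations user=$USER format=Account,Partition

    # show a partition's access controls (AllowAccounts, AllowGroups)
    scontrol show partition partition_name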
