Unity News of the Day


This page contains upcoming and recent news, announcements and other information.

This page formerly contained some information that belonged on the FAQ page. Apologies if you've been directed to this page in error; please see the FAQ for more information.

Scheduled maintenance

We perform scheduled maintenance two to four times a year.

Two weeks before scheduled maintenance, the Slurm scheduler will begin holding submitted jobs that could run past the beginning of the maintenance window (shorter jobs will run normally). If you see a status like (ReqNodeNotAvail, Reserved for maintenance) for a job in the output of the squeue command, that's what's going on.
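For example, you can check your own jobs with a standard squeue call:

squeue -u $USER

Held jobs show the reason in the NODELIST(REASON) column; they will start automatically once the maintenance window ends (or sooner, if they're short enough to finish before it begins).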

December 22-23, 2025 maintenance complete

  • New Slurm scheduling software version.
  • Revised module system (see below).
  • Operating system patches.
  • Hardware patches.

Change to module system for application software

We are moving to a shared software stack between the Unity cluster and all College of Arts and Sciences (ASC) Linux workstations. This provides full compatibility across all College compute resources, ensuring that the code you develop on a local ASC workstation will run identically on Unity compute nodes.

Accessing Legacy Modules

Because we are changing how software is organized, your existing module load commands may fail at first. If a piece of software no longer loads, you can restore access to the legacy modules by adding the following line to your interactive login scripts (e.g., .bashrc) or your SLURM submission scripts:

module use /fs/project/unity-lmod-2025/share/lmodfiles/Core

Note: Using this command allows your environment to "see" and access the legacy software modules while we work on the transition.
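For example, a minimal SLURM batch script using the legacy modules might look like the following sketch; the module name my-app/1.0 and the final command are placeholders for whatever software you actually run:

#!/bin/bash
#SBATCH --job-name=legacy-example
#SBATCH --time=00:10:00
#SBATCH --ntasks=1

# Make the legacy module tree visible to this job
module use /fs/project/unity-lmod-2025/share/lmodfiles/Core

# Load and run your software as before (placeholder names)
module load my-app/1.0
my-app input.dat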

Special note on Jupyter

When you start a Jupyter notebook from OnDemand, you need to specify a Python instance. The default Python instance still points to the old module system, so you'll need to put the "module use" line above in your .bashrc file until we set a new default.
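If you haven't already added it, one quick way to append the line to your .bashrc from a shell prompt is:

echo 'module use /fs/project/unity-lmod-2025/share/lmodfiles/Core' >> ~/.bashrc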

Special note on conda environments

The new module system does not include conda; instead, we recommend that you set up an instance in your home directory. In particular, we recommend mamba, which is essentially a faster, more efficient version of conda. The conda-forge GitHub page has good instructions for this; we'll walk through them here. The two commands we need are found under the "Unix-like platforms" subsection; although they're simple, it's a good idea to run them on a compute node.

First, fetch the miniforge installer:

wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"

Then run the installer:

bash Miniforge3-$(uname)-$(uname -m).sh

After you accept the license, Miniforge installs. When installation is complete, it offers to add an initialization stanza to your .bashrc file (this really just sets up some environment variables); it's convenient to allow that, so that every new shell will have conda/mamba available without loading an additional conda module or using the module use work-around.
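To confirm everything works, open a new shell (or source your .bashrc) and check the version. Creating a throwaway environment is an optional sanity check; the environment name and Python version below are just examples:

source ~/.bashrc
mamba --version

# Optional: create, use, and remove a test environment
mamba create -n scratch python=3.12
mamba activate scratch
python --version
mamba deactivate
mamba env remove -n scratch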

Special note on R 4.4.0

The R/4.4.0 module in the old module system does not load properly, even after the module use command above. Since we intend to end support for the old module system within the next six months, we suggest three reasonable options:

  1. Start using the R module in the new module system. This is currently version 4.5.1, so you'll have to reinstall all of your R packages.
  2. Run R 4.4.0 (or whatever version you want) from a conda environment.
  3. Run R in an Apptainer container.

Since we just discussed conda environments above, we'll document that approach here. Again, since the process can take a while, please run it on a compute node, not a login node.

To create an environment for R 4.4.0, use this command:

mamba create -n r44-base r-base=4.4.0

or, to get R along with some common packages:

mamba create -n r44-essentials r-essentials=4.4.0

The base R environment occupies about 1 GB in the home directory; the R essentials environment, 2.5 GB.
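If you're curious how much space your environments actually use, du can report it; this assumes Miniforge's default install location under your home directory:

du -sh ~/miniforge3/envs/*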

To use an environment (and the R in it):

mamba activate r44-essentials
R
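To install additional R packages into the environment later, note that conda-forge names them with an r- prefix (r-ggplot2 below is just an example), and use mamba deactivate to leave the environment when you're done:

mamba install -n r44-essentials r-ggplot2
mamba deactivate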

Software Migration Requests & Timeline

Our goal is to move all necessary tools into the new unified system. If you find yourself needing to use the legacy path, please let us know by submitting a request here:

👉 ASC Software Migration Request Portal

Thank you for your patience as we work to provide a more consistent and powerful computing environment for the College.

 

December 17, 2024 maintenance complete

  • OS updates – new kernel and security updates (RHEL 8.10+)
  • Firewall and network switch updates (ASC fileservers were unavailable during the window)
  • SLURM scheduler updated to 23.11.10
  • New default versions for various software packages in the module system.

 

May 6, 2024 maintenance complete

  • The operating system of all nodes, including login and compute nodes, updated to RedHat 8.9.
  • Home directories moved to our new storage system (project space has been on this storage system for several months).
  • New default versions of several modules.
  • A number of compiled programs have been affected by the update from RedHat 7.9 to 8.9. If you see a new error about a missing library, this may be the problem. Please contact us for a fix.
  • R/4.4.0. Around the time of this maintenance, a vulnerability in versions of R earlier than 4.4.0 was reported. Most of the existing versions of R on Unity are affected by the missing library error (above). Until we have a clear path forward, we encourage use of 4.4.0. If that's a problem for you, please contact us. See also OSC's statement on the issue.

December 28, 2023 maintenance complete

  • Operating system update to RedHat 7.9
  • Kernel updates
  • BIOS updates
  • New version of Slurm
  • New CUDA drivers
  • New defaults for applications with new versions added since last maintenance.

August 21, 2023 maintenance complete

Updates and rolling reboots of all nodes.

January 5, 2023 maintenance complete

  • RedHat patches and updates to RedHat 7.9
  • NVIDIA CUDA driver 525.78.01
  • Mathematica 13.1 default
  • Matlab 2022b default
  • New project storage is now available to all customers. All existing customers (past 5 years) with shares located in /fs/project/$name have been upgraded to the new storage environment. The new storage cluster has 4x10Gb uplinks to the Unity cluster and allows much faster data transfers. If you need additional storage, please contact asctech@osu.edu.

 
