USNH Premise Cluster

Overview of the Premise cluster

The Premise High-Performance Computing (HPC) cluster is a collection of USNH servers dedicated to research-related computational analysis.

Funding

This cluster was funded by what is commonly called the "Condo Compute Model".

For Free?

The Premise core infrastructure is provided at no cost to USNH researchers. It includes racks, power distribution, cooling, network connectivity, file storage, and six servers. This shared infrastructure has been funded by ET&S and REEO.

A "shared" job queue is available for all Premise users. The Technology Governance Committee regularly reviews usage to ensure equitable availability of all HPC resources.

Buy-in

Your budget should include "HPC buy-in" funding to satisfy your project's minimum needs. The "Description of Hardware" section below defines three standard node configurations (approximate prices for R760 nodes as of 12/1/23): base ($20k), hi-ram ($25k), and gpu ($28+k). Contact RCC for current pricing for your proposal budget. Your grant retains ownership of any hardware you purchase.

Owners are provided a restricted job queue with priority scheduling on any hardware they own. When no owner-priority work exists, "shared" queue jobs may be scheduled on the idle hardware. Owners should expect active shared jobs to be allowed to complete in a "reasonable amount of time", which may cause wait times for some priority jobs.

Citation & Proposal language

Please acknowledge the use of Premise in your papers like this:

You may find the following narrative description of Premise useful when writing proposals:

Description of Hardware

The Premise cluster is an HPC system made up of:

See purchase history for more detail.

CPU performance only

Premise has 67 compute nodes with CPUs of differing core counts, for a total of over 2,900 cores. Individual nodes have computing power ranging from almost 500 GFlops to over 7 TFlops.

(Total CPU performance) 
  = ( 1 nodes) * ( 48 cores) * (2.80 GHz) * (16 Flops/cycle) =  2150 GFlops
  + ( 1 nodes) * ( 64 cores) * (2.80 GHz) * (16 Flops/cycle) =  2867 GFlops
  + ( 2 nodes) * ( 64 cores) * (2.50 GHz) * ( 8 Flops/cycle) =  2560 GFlops
  + (27 nodes) * ( 24 cores) * (2.50 GHz) * ( 8 Flops/cycle) = 12960 GFlops
  + ( 4 nodes) * ( 24 cores) * (2.60 GHz) * (32 Flops/cycle) =  7987 GFlops
  + ( 1 nodes) * ( 80 cores) * (2.10 GHz) * (32 Flops/cycle) =  5376 GFlops
  + ( 1 nodes) * ( 40 cores) * (2.10 GHz) * (32 Flops/cycle) =  2688 GFlops
  + ( 5 nodes) * ( 32 cores) * (2.80 GHz) * (32 Flops/cycle) = 14336 GFlops
  + ( 9 nodes) * ( 40 cores) * (2.50 GHz) * (32 Flops/cycle) = 28800 GFlops
  + ( 6 nodes) * (112 cores) * (2.00 GHz) * (32 Flops/cycle) = 43008 GFlops
  + (10 nodes) * ( 64 cores) * (2.50 GHz) * (32 Flops/cycle) = 51200 GFlops
  = (173932 GFlops) 
  = (173.932 TFlops)
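As a sanity check, the same arithmetic can be reproduced with a short script. This is only a sketch: the node specifications are copied from the table above, and peak performance is estimated with the usual nodes * cores * clock * Flops-per-cycle formula.

# Peak CPU performance estimate: nodes * cores * GHz * Flops/cycle,
# using the node specifications listed in the table above.
node_types = [
    # (nodes, cores, clock_ghz, flops_per_cycle)
    (1,   48, 2.80, 16),
    (1,   64, 2.80, 16),
    (2,   64, 2.50,  8),
    (27,  24, 2.50,  8),
    (4,   24, 2.60, 32),
    (1,   80, 2.10, 32),
    (1,   40, 2.10, 32),
    (5,   32, 2.80, 32),
    (9,   40, 2.50, 32),
    (6,  112, 2.00, 32),
    (10,  64, 2.50, 32),
]

total_cores  = sum(n * c for n, c, _, _ in node_types)
total_gflops = sum(n * c * ghz * fpc for n, c, ghz, fpc in node_types)

print(f"{total_cores} cores, {total_gflops:.1f} GFlops ~ {total_gflops / 1000:.1f} TFlops")
# -> 2936 cores, 173932.8 GFlops ~ 173.9 TFlops
# (the per-node values in the table above are truncated to whole GFlops)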

GPU performance only

Nine of the compute nodes of the Premise cluster each contain two NVIDIA K80 GPU cards. One compute node contains a single NVIDIA V100 GPU card. Fourteen compute nodes each contain a single NVIDIA A100 GPU card.

(Total GPU performance)
  = ( 9 nodes) * (2 K80  GPU/node) * (1.87 TFlops/GPU) =  33.66 TFlops 
  + ( 1 nodes) * (1 V100 GPU/node) * (7    TFlops/GPU) =   7.00 TFlops
  + (14 nodes) * (1 A100 GPU/node) * (9.7  TFlops/GPU) = 135.80 TFlops
  = (176.46 TFlops)

Additionally, if leveraged, the Tensor Cores in the fourteen NVIDIA A100 GPUs add another 137.20 TFlops to the maximum theoretical performance.

CPUs + GPUs

(Combined Performance)
  = (Total CPU performance) + (Total GPU performance) + (A100 Tensor Core addition)
  = (173.932 TFlops) + (176.46 TFlops) + (137.20 TFlops) 
  = (487.592 TFlops)
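A similar sketch reproduces the GPU and combined figures. The per-GPU numbers are the ones used above; the Tensor Core addition is assumed to be the gap between the A100's FP64 Tensor Core peak (19.5 TFlops) and its 9.7 TFlops base figure.

# Peak GPU performance estimate, using the per-card figures from the section above.
gpu_node_types = [
    # (nodes, gpus_per_node, tflops_per_gpu)
    (9,  2, 1.87),   # K80 nodes
    (1,  1, 7.0),    # V100 node
    (14, 1, 9.7),    # A100 nodes
]

gpu_tflops   = sum(n * g * t for n, g, t in gpu_node_types)  # 176.46
tensor_extra = 14 * (19.5 - 9.7)                             # 137.2, assumed A100 FP64 Tensor Core gain
cpu_tflops   = 173.932                                       # from the CPU section above

combined = cpu_tflops + gpu_tflops + tensor_extra
print(f"GPU: {gpu_tflops:.2f} TFlops, combined: {combined:.2f} TFlops")
# -> GPU: 176.46 TFlops, combined: 487.59 TFlops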

Usage

Premise is managed by USNH Research Computing Center (RCC) staff. Please email all requests to rcc.support@unh.edu. The focus of the Premise HPC cluster is to support USNH research. If you are seeking academic services, please contact the RCC for other options.

Utilize Premise for Research

Establish a Premise account

Create a Premise account by emailing USNH Research Computing Center staff at: rcc.support@unh.edu

Getting started

HPC software is most often field specific. You probably have a better idea of where to look for relevant software tools in your field than we do, but you are welcome to ask RCC what we might know.

If you are bringing your own source code or using common Linux tools, these may already exist on the cluster. Some software packages may be available as "modules" (more information is available below).

Connecting to Premise

Premise is accessible via SSH, so you will need an SSH client to connect to it remotely. Further details will be provided during the Premise overview session.

How do I run my program

The RCC provides every new Premise user with an overview session. This session takes around an hour and gives users an overview of Premise: how to connect, how to interact with Slurm, how to run software, how to transfer files, and other considerations specific to the user's research work. Further support is always available via email: rcc.support@unh.edu
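As a rough illustration only (not official RCC guidance), batch work on Slurm clusters is typically described in a small job script and submitted with sbatch. In the sketch below, the partition name "shared", the resource requests, and the program name are placeholders to confirm with RCC during your overview session.

# Minimal sketch: write a Slurm batch script and submit it with sbatch.
# The partition name "shared", the resource requests, and "./my_analysis"
# are placeholders -- confirm actual queue names and limits with RCC.
from pathlib import Path
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=shared
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
#SBATCH --output=example_%j.out

srun ./my_analysis
"""

Path("example.slurm").write_text(job_script)
subprocess.run(["sbatch", "example.slurm"], check=True)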

Visualize Current Usage

RCC chose XDMoD to visualize Premise usage. The Premise XDMoD webpage can be viewed on campus or while connected to the VPN.

Premise Software Usage

For Premise software usage, please see here.