This cluster was funded by what is commonly called the "Condo Compute Model".
The Premise core infrastructure is provided for UNH researchers and
includes: racks, power distribution, cooling, the InfiniBand network
mesh, Lustre file storage, a head node, and four compute nodes. You
may thank UNH RCC, UNH Central IT, and the Research Office for
providing this infrastructure.
Users compete equally for available resources in a "shared" job queue.
All Premise users are expected to play nicely. An HPC Advisory Board
has been created to provide RCC with direction on enforcement for the
common good. Every attempt will be made to utilize available resources.
Your budget should include "HPC buy-in" funding to satisfy your
project's minimum needs. There is no other way to guarantee that the
required resources will be available when you need them. The
"Description of Hardware" section below defines three standard node
configurations (approximate price on 12/1/16): base ($8k), hi-ram
($12.5k), and gpu ($13.5k). Contact RCC for current pricing for your
proposal budget. Your grant retains ownership of any hardware you purchase.
Owners are provided a restricted job queue with priority scheduling on
any hardware they own. When no owner priority work exists, "shared"
queue jobs may be scheduled on the idle hardware. Owners should
expect active shared jobs to be allowed to complete in a "reasonable
amount of time", which might cause wait times for some priority jobs.
Description of Hardware
The Premise cluster is an HPC system made up of:
The login/head node is "premise.sr.unh.edu"
14 compute nodes connected together using 56 Gb/s FDR InfiniBand networking
Each node has two 12-core CPUs
All nodes have at least 128GB of main memory
Four nodes have NVIDIA K80 GPUs
Four nodes have 512GB of main memory
The entire cluster shares 225TB+ of usable Lustre storage.
What is the theoretical performance of this cluster?
CPU performance only
Premise has 14 compute nodes with two CPUs per node, for a total of 28
CPUs. Each 12-core "Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz" is
rated at 356.50 double-precision GFlops.
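As a rough worked figure (assuming every CPU runs at its rated peak simultaneously), that gives 28 × 356.50 GFlops ≈ 9,982 GFlops, or about 10 double-precision TFlops of theoretical peak from the CPUs alone.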
Premise is managed by UNH Research Computing Center staff. Please
email administrative or technical requests to:
The focus of the Premise cluster is to support UNH research. If you
are seeking academic student HPC experience, we currently suggest using
XSEDE resources. For more information on XSEDE, please contact the UNH
Campus Champion, Grace Wilson Caudill, at Grace.WilsonCaudill@unh.edu
Utilize Premise for Research
Establish a Premise account
Create a Premise account by emailing UNH Research Computing Center staff at:
For account creation requests we suggest providing the following information:
Email and phone:
Requested login id:
Expected use case / Research area:
(Other relevant information)
Examples of other relevant information:
Planning to use Premise for my grants: X & Y
Doing preliminary work for a grant submission to X
I am working with Professor X on project Y
If possible use my account from RCC system X
Please set up my account like user X
I plan to install software X in my home area.
Can open source software package X (available from Y) be installed?
Connecting to Premise
The only way to connect to Premise is by using a Secure Shell (SSH)
connection. You will need an SSH client program on your internet-connected
computer to reach Premise. Often this can be done on your
local command line by typing:
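ssh <your_login_id>@premise.sr.unh.edu

(The host name is the login/head node listed under "Description of Hardware"; replace <your_login_id> with the login id assigned to your Premise account.)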
You may need to install an SSH client if your computer does not already provide one.
HPC software is most often field specific. You probably have a better
idea than we do of where to look for relevant software tools in your
field, but you are welcome to ask RCC what we might know.
If you are bringing your own source code or using common Linux tools,
they may already exist on the cluster. Some software packages may be
available as "modules" (more information available below).
How do I run my program?
You should not run your programs directly on the Premise login/head node.
First copy any required data onto the Premise system, preferably into a
subdirectory of your home area. The same home area is mounted on all
of the Premise nodes.
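For example, data can be copied from your own computer with scp (the local directory, target directory, and login id below are placeholders):

scp -r ./my_input_data <your_login_id>@premise.sr.unh.edu:~/my_project/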
Once you have the application and the necessary data ready, you will
submit it as a job to the batch system using Slurm. For details on using
Slurm, start with the local Slurm usage notes.
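A minimal batch script might look like the sketch below (the job name, resource requests, time limit, and program name are placeholders rather than Premise defaults; see the local Slurm usage notes for recommended settings):

#!/bin/bash
#SBATCH --job-name=my_job        # name shown in the queue (placeholder)
#SBATCH --ntasks=1               # run a single task
#SBATCH --cpus-per-task=24       # request all 24 cores on one node
#SBATCH --time=01:00:00          # wall-clock limit (placeholder)

./my_program input.dat           # your application and data (placeholders)

Save it as, say, my_job.slurm, submit it with "sbatch my_job.slurm", and check its status with "squeue -u <your_login_id>".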
You may ask other Premise questions via email to
firstname.lastname@example.org. This is RCC's general
support email, so please indicate that your question is related to Premise.
module avail
Provides a list of modules available on this cluster.
module load X
Loads the package X into the current shell's environment. If more than one version is available, it is specified as X/version.
module list
Displays the list of packages currently loaded in this shell.
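For example, to see what is installed and make the GROMACS build listed at the end of this page available in your current shell:

module avail
module load gromacs/gromacs-5.1.2
module list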
Users of Matlab on the Premise compute cluster should not run it
graphically on the Premise head node. Unlike running on your desktop,
Matlab jobs must be submitted to the Slurm job queue. A helper script
has been created to submit your Matlab .m scripts for you.
Use "sMATLAB.py --help" to describe available options and defaults.
Adding the "--verbose" option to sMATLAB.py displays both the Slurm
sbatch command line and helper job script that is being generated for you.
This could be used as a starting point for users wishing to create
their own Slurm scripts.
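A typical invocation might look like the following (the script name is a placeholder and the exact argument form is an assumption; run "sMATLAB.py --help" to confirm the options available on Premise):

sMATLAB.py --verbose my_analysis.m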
Note that Matlab does not automatically use all the cores on a node,
or split a job across multiple nodes for you. These features must be
coded into your scripts. Some web documentation links that might be
helpful are included in the autogenerated script.
Here is an example Matlab script utilizing "parfor" to iterate work
across all the cores on one node. Using Matlab on more than one node is not
supported. Premise nodes currently all have 24 cores.
parpool(str2num(getenv('SLURM_JOB_CPUS_PER_NODE'))); % workers = 24 cores per Premise node
tic                % start timer
ticBytes(gcp);     % time should include distribution transfers
n = 1024;
A = zeros(n);
parfor (i = 1:n)   % Distribute these "n" iterations over workers in parpool.
    A(i,:) = (1:n) .* sin(i*2*pi/1024);
end
tocBytes(gcp)      % timer should include collection transfers
toc                % stop & display elapsed time.
namd -- Parallel molecular dynamics simulation of biomolecular systems. website
gromacs/gromacs-5.1.2 -- package to perform molecular dynamics. website
vmd -- Molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics. website