Summary
Clipper is now available for classroom use, primarily for teaching rather than research. This allows students to gain hands-on experience using an HPC environment.
Allocated Resources
Students have access to a dedicated class partition with the following total resources:
- 160 CPUs
- 160 GPU "shards"
- 1.5 TB of memory
Each student may submit jobs with a maximum of:
- 4 CPUs
- 4 GPU "shards"
- 32 GB of memory
Sharding is a feature of the Slurm workload manager that allows a single GPU to be shared among many jobs. It is not possible to request an entire GPU on the class partition. Please see Using GPUs on Clipper for information on requesting a shard. GPU jobs started through Open OnDemand are automatically allocated four shards.
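As a sketch of what a shard request looks like outside of Open OnDemand (resource values here are illustrative; adjust them within the per-student limits):

```shell
# Request an interactive shell on the class partition with one GPU shard.
# --gres=shard:1 asks for a single shard rather than a whole GPU.
srun --partition=class --gres=shard:1 --cpus-per-task=1 --mem=8G --pty bash
```

The `--pty bash` option gives you an interactive shell on the allocated node, which is useful for short tests before writing a full batch script.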
If a student submits a single job that requests more than the specified limits, it will be automatically rejected by Slurm.
Jobs from the same user are managed such that their combined resource usage cannot exceed the per-job limits. If multiple jobs collectively exceed these limits, they will queue and execute sequentially as resources free up.
When all class partition resources are in use, additional job submissions will queue until sufficient resources free up. For example, if all 160 CPUs are currently allocated, newly submitted jobs will wait in the queue.
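To see whether your jobs are running or still waiting for resources, the standard Slurm query commands can be used (a sketch; these are standard Slurm client tools and should be available on the login node):

```shell
# List your own jobs on the class partition; the ST column shows
# R (running) or PD (pending, i.e. waiting for resources).
squeue -u $USER -p class

# Show overall node availability on the class partition.
sinfo -p class
```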
Accessing the HPC Cluster
If you are connected to the student Wi-Fi, you will need to connect to the VPN first.
To access the cluster for classwork, use the provided login link. After signing in, you can run either batch jobs from shell access or interactive apps.

Using Interactive Apps on Clipper
Clipper has the following interactive apps installed:
- Jupyter Server
- VS Code
- RStudio Server
- Clipper Desktop (MATLAB, ANSYS, STAR-CCM+)
To access interactive apps, click on the interactive apps menu item; this will list the apps you are allowed to run based on your assigned permissions.

Submitting a Batch Job
Refer to the knowledge base article Submitting a job for instructions on submitting a basic job. Be sure to change the partition setting to class instead of short. Additional examples specific to the class partition are shown below.
Students will be unable to submit to other partitions available on Clipper. If a student attempts to submit to a partition other than class, they will receive the following error message from Slurm:
sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified
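If you see this error and are unsure which partition your account is associated with, you can query your Slurm associations (a sketch; the exact columns shown depend on the site's Slurm accounting configuration):

```shell
# Show which account/partition combinations your user may submit to.
sacctmgr show associations user=$USER format=Account,Partition,User
```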
Example Job without GPU
#!/bin/bash
#SBATCH --job-name=my_job # job name
#SBATCH --output=output_%j.txt # output file name (%j expands to jobID)
#SBATCH --time=01:00:00 # time limit (hh:mm:ss)
#SBATCH --ntasks=1 # number of tasks (normally 1, unless mpi)
#SBATCH --cpus-per-task=4 # cpu cores per task
#SBATCH --mem=32G # memory per node
#SBATCH --partition=class # must be class
# Load any necessary modules
# Commands to execute your job
echo "A class job using all available resources EXCEPT gpu"
sleep 10
Example Job with GPU
#!/bin/bash
#SBATCH --job-name=my_job # job name
#SBATCH --output=output_%j.txt # output file name (%j expands to jobID)
#SBATCH --time=01:00:00 # time limit (hh:mm:ss)
#SBATCH --ntasks=1 # number of tasks (normally 1, unless mpi)
#SBATCH --cpus-per-task=4 # cpu cores per task
#SBATCH --mem=32G # memory per node
#SBATCH --partition=class # must be class
#SBATCH --gres=shard:4 # request 4 gpu shards
# Load any necessary modules
# Commands to execute your job
echo "A class job using all available resources"
sleep 10
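Assuming the script above is saved as gpu_job.sh (a hypothetical filename), it can be submitted and monitored like this:

```shell
# Submit the batch script; Slurm prints the assigned job ID.
sbatch gpu_job.sh

# Check the job's state (R = running, PD = pending).
squeue -u $USER

# After completion, output lands in output_<jobID>.txt per the
# --output directive in the script (replace 12345 with your job ID).
cat output_12345.txt
```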
File Management
To upload or download files from the cluster, click on the file management tile on the main screen:

This will open a new window where you can use the top menu bar to upload, download, or delete files as needed.
