Welcome to Clipper


Welcome to the Clipper High Performance Computing cluster at Grand Valley State University.

Clipper is a collection of Dell PowerEdge servers and storage communicating through high-speed Ethernet and InfiniBand networking. Use of the cluster’s shared resources is controlled by the Slurm workload manager.
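Work is typically submitted to Slurm as a batch script rather than run directly on the login node. The sketch below is a minimal example of such a script; the resource requests are only illustrative and the program name is a placeholder:

    #!/bin/bash
    #SBATCH --job-name=example       # name shown in the queue
    #SBATCH --nodes=1                # run on a single node
    #SBATCH --ntasks=1               # one task (process)
    #SBATCH --cpus-per-task=4        # four CPU cores for the task
    #SBATCH --mem=8G                 # memory for the job
    #SBATCH --time=01:00:00          # one-hour wall-clock limit

    # Commands below run on the allocated compute node.
    hostname
    ./my_program                     # placeholder for your own program

Submit the script with sbatch (for example, sbatch job.sh) and check its status with squeue.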

The cluster is available to current students, faculty, and staff for use in their research or teaching.

Management of Clipper is a collaborative effort between members of GVSU’s Academic Research Computing and Enterprise Architecture teams, with input from several School of Computing faculty.

Gaining Access to Clipper

Request access to Clipper through the GVSU Services portal. Faculty or staff sponsorship is required for students wishing to use the cluster.

Once access has been granted, connect to Clipper’s login node.
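Connections are made over SSH using your GVSU credentials. The hostname and username below are only illustrative; use the address and account details provided when your access request is approved:

    ssh yourNetID@clipper.gvsu.edu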

Cluster Summary

Operating System     Red Hat Enterprise Linux 9.3
Scheduler            Slurm 23.11.6
Total Nodes          20 (12 CPU, 8 GPU)
Total CPU / Cores    48 / 800
Total Memory         15 TB
Total GPU / vRAM     16 / 640 GB
Total Storage        185 TB SSD / 146 TB HDD
Interconnect         Ethernet (10 Gbps), InfiniBand HDR (100 Gbps)

Resource Summary

A small portion of resources on each node is reserved for normal system operations, so the cores and memory available for scheduled jobs will be slightly less than the totals shown here. An example of requesting these resources with Slurm follows the table.

Nodes        CPUs per Node                           Cores  Memory  Local SSD  GPUs per Node
b[001-004]   4× Intel Xeon Gold 6226 @ 2.70 GHz      48     1.5 TB  1.8 TB     N/A
c[001-006]   2× Intel Xeon Gold 6248 @ 2.50 GHz      40     768 GB  1.8 TB     N/A
c[007-008]   2× Intel Xeon Gold 6238R @ 2.20 GHz     56     768 GB  1.8 TB     N/A
g[001-004]   2× Intel Xeon Gold 6226 @ 2.70 GHz      24     384 GB  888 GB     2× NVIDIA Tesla V100S
g[005-008]   2× Intel Xeon Gold 6230 @ 2.10 GHz      40     384 GB  888 GB     2× NVIDIA Quadro RTX 8000
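Jobs are matched to these nodes by the resources they request. The commands below are a rough sketch, assuming GPUs are exposed through Slurm's generic-resource (GRES) mechanism; the program names are placeholders:

    # CPU-only job: all 40 cores of one c[001-006]-class node
    srun --nodes=1 --ntasks=1 --cpus-per-task=40 --mem=64G ./cpu_program

    # GPU job: one GPU plus eight CPU cores on a g-series node
    srun --gres=gpu:1 --cpus-per-task=8 --mem=32G ./gpu_program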

Filesystem Summary

A variety of filesystems are available for short- and long-term data retention. Usage limits may be imposed.

Filesystem      Purpose                                      Total Space
/mnt/home       Long-term, fast access, user home folders    20 TB
/mnt/projects   Long-term, fast access, project data         35 TB
/mnt/scratch    Short-term, fast access, large data          150 TB
/mnt/archive    Long-term, slow access, large data           146 TB
/tmp            Temporary storage, local to each node        800 GB
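A common pattern is to stage large working data on /mnt/scratch (or node-local /tmp) for the duration of a job and copy results back to a long-term location when the job finishes. The sketch below assumes a per-user directory under /mnt/scratch, which may differ from Clipper's actual layout; the program and file names are placeholders:

    # Inside a batch script: stage input to scratch, run, then copy results back.
    WORKDIR=/mnt/scratch/$USER/$SLURM_JOB_ID    # assumed per-user scratch path
    mkdir -p "$WORKDIR"
    cp ~/input.dat "$WORKDIR"/
    cd "$WORKDIR"
    ./my_program input.dat > output.dat         # placeholder program
    cp output.dat ~/results/
    rm -rf "$WORKDIR"                           # clean up scratch when finished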

