Welcome to the Clipper High Performance Computing cluster at Grand Valley State University.
Clipper is a collection of Dell PowerEdge servers and storage communicating through high-speed Ethernet and InfiniBand networking. Use of the cluster’s shared resources is controlled by the Slurm workload manager.
The cluster is available to current students, faculty, and staff for use in research or teaching.
Management of Clipper is a collaborative effort between members of GVSU’s Academic Research Computing and Enterprise Architecture teams, with input from several School of Computing faculty.
Gaining Access to Clipper
Request access to Clipper through the GVSU Services portal. Faculty or staff sponsorship is required for students wishing to use the cluster.
Once access has been granted, connect to Clipper’s login node.
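Connecting is done over SSH. The hostname below is a placeholder, not the actual login node address, which is provided when access is granted:

```shell
# Connect to the Clipper login node with your GVSU credentials.
# The hostname here is hypothetical; use the address given to you
# when your access request is approved.
ssh yourNetID@clipper.example.gvsu.edu
```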
Cluster Summary
|                      |                                                |
| -------------------- | ---------------------------------------------- |
| Operating System     | Red Hat Enterprise Linux 9.5                   |
| Scheduler            | Slurm 24.11.4                                  |
| Total Nodes          | 12 CPU, 11 GPU (23 total)                      |
| Total CPU / Cores    | 54 / 984                                       |
| Total Memory         | 16.75 TB                                       |
| Total GPU / vRAM     | 22 / 1.2 TB                                    |
| Total Usable Storage | 150 TB SSD / 146 TB HDD                        |
| Interconnect         | Ethernet (10 Gbps), InfiniBand HDR (100 Gbps)  |
Resource Summary
A small portion of resources is reserved for normal system operations, so the cores and memory available for scheduling will be somewhat less than these totals. Every submitted job has a maximum run time of five days, after which it is automatically terminated.
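As a sketch, a minimal Slurm batch script that stays within the five-day limit might look like the following. The partition and resource values are illustrative, and `my_program` is a placeholder for your own executable:

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=cpu           # one of: bigmem, cpu, gpu
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=5-00:00:00         # walltime cap: jobs exceeding 5 days are terminated

srun ./my_program                 # my_program is a placeholder for your executable
```

Save the script (e.g. as `job.sh`) and submit it with `sbatch job.sh`; check its status with `squeue -u $USER`.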
| Partition | Nodes      | CPUs                                     | Cores | Memory | Local Disk | GPUs                       |
| --------- | ---------- | ---------------------------------------- | ----- | ------ | ---------- | -------------------------- |
| bigmem    | b[001-004] | (4) Intel Xeon Gold 6226 CPU @ 2.70GHz   | 48    | 1.5 TB | 1.8 TB     | N/A                        |
| cpu       | c[001-006] | (2) Intel Xeon Gold 6248 CPU @ 2.50GHz   | 40    | 768 GB | 1.8 TB     | N/A                        |
| cpu       | c[100-101] | (2) Intel Xeon 6730P CPU @ 2.50GHz       | 64    | 512 GB | 3.5 TB     | N/A                        |
| gpu       | g[001-004] | (2) Intel Xeon Gold 6226 CPU @ 2.70GHz   | 24    | 384 GB | 888 GB     | (2) NVIDIA Tesla V100s     |
| gpu       | g[005-008] | (2) Intel Xeon Gold 6230 CPU @ 2.10GHz   | 40    | 384 GB | 888 GB     | (2) NVIDIA Quadro RTX 8000 |
| gpu       | g[050-052] | (2) Intel Xeon Gold 6238R CPU @ 2.20GHz  | 56    | 768 GB | 1.8 TB     | (2) NVIDIA H100 NVL        |

All figures are per node.
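Jobs on the GPU nodes request GPUs through Slurm's generic resource (GRES) mechanism. A hedged example follows; the exact GRES names and types configured on Clipper may differ:

```shell
#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1              # request one GPU (nodes have 2 each)
#SBATCH --time=1-00:00:00

# Confirm which GPU Slurm allocated to this job
nvidia-smi
```

If the site defines typed GRES, a form like `--gres=gpu:h100:1` can target a specific GPU model; run `sinfo -o "%P %G"` on the cluster to see what is actually configured.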
Filesystem Summary
A variety of filesystems are available for short- and long-term data retention. Usage limits may be imposed: each user's home folder is limited to 50 GB, and each project folder is limited to 2 TB. Exceptions to these limits may be granted in special cases where additional resources are required for critical research or projects.
| Mount Point   | Description                                                  | Capacity                          |
| ------------- | ------------------------------------------------------------ | --------------------------------- |
| /mnt/home     | Long-term, fast access, user home folders                    | 20 TB                             |
| /mnt/projects | Long-term, fast access, project data                         | 90 TB                             |
| /mnt/scratch  | Short-term, fast access, large data                          | 40 TB                             |
| /mnt/archive  | Long-term, slow access, large data                           | 146 TB                            |
| /tmp          | Temporary storage for the life of a single job, on each node | 888 GB - 3.5 TB (depends on node) |
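Because /tmp is local to each node and exists only for the life of a job, a common pattern is to stage data there, compute on the fast local disk, and copy results back to a shared filesystem before the job ends. A sketch, with illustrative paths and a placeholder `my_program`:

```shell
#!/bin/bash
#SBATCH --partition=cpu
#SBATCH --time=0-04:00:00

# Stage input from shared storage to fast node-local /tmp
WORKDIR=/tmp/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cp "/mnt/scratch/$USER/input.dat" "$WORKDIR/"

# Run from local disk to reduce load on the shared filesystems
cd "$WORKDIR"
./my_program input.dat > output.dat   # my_program is a placeholder

# Copy results back before the job ends; /tmp is not visible from other nodes
cp output.dat "/mnt/scratch/$USER/"
```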