What is an HPC cluster?
An HPC cluster is a collection of many separate servers (computers), called nodes, which are connected via a fast interconnect.
There may be different types of nodes for different types of tasks.
Each of the HPC clusters listed on this site has
- a headnode or login node, where users log in
- a specialized data transfer node
- regular compute nodes (where the majority of computations run)
- "fat" compute nodes that have at least 1TB of memory
- GPU nodes (on these nodes, computations can run both on CPU cores and on Graphics Processing Units)
- an InfiniBand switch connecting all of the nodes
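On clusters managed by the Slurm scheduler (an assumption here; check your cluster's documentation for the actual scheduler and partition names), you can see the different node types from the login node. The partition names below are illustrative only:

```shell
# List partitions and node counts; fat-memory and GPU nodes are usually
# grouped into their own partitions (names vary by site):
sinfo

# Show details for one node, including memory and GPU resources
# ("nova001" is a hypothetical node name):
scontrol show node nova001
```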
All cluster nodes have the same basic components as a laptop or desktop: CPU cores, memory, and disk space. The difference between a personal computer and a cluster node lies in the quantity, quality, and power of those components.
Users log in from their own computers to the cluster headnode using the ssh program.
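A login session might look like the following. The hostname and username are placeholders; use the credentials and address provided for your cluster:

```shell
# Connect to the cluster headnode (replace both placeholders):
ssh username@headnode.example.edu

# Optionally, an entry in ~/.ssh/config saves retyping the full address:
#
#   Host mycluster
#       HostName headnode.example.edu
#       User username
#
# After adding it, "ssh mycluster" is sufficient.
```

On Windows, an ssh client is available in PowerShell on recent versions, or through third-party tools such as PuTTY.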
Next: HPC clusters at ISU