SCSLab
The SCSLab cluster consists of four GPU compute nodes and a login node, which also serves as the storage node. All nodes and storage are connected via an Ethernet switch.
Detailed Hardware Specification
| Nodes | Processors per Node | Cores per Node | Memory per Node | Accelerator Cards per Node | Type |
|---|---|---|---|---|---|
| 2 | 2 x 24-Core Intel Xeon Silver 4214 | 48 | 192 GB | 2 x NVIDIA Titan RTX | Compute |
| 1 | 2 x 16-Core Intel Xeon E5-2620 v4 | 32 | 128 GB | 4 x NVIDIA TITAN X (Pascal) | Compute |
| 1 | 2 x 16-Core Intel Xeon E5-2620 v4 | 32 | 256 GB | 4 x NVIDIA Tesla P40 | Compute |
| 1 | 2 x 4-Core Intel Xeon E5-1620 v4 | 8 | 64 GB | N/A | Login/Storage |
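Once logged in to a node, standard Linux tools can confirm the specifications in the table above. The commands below are a minimal sketch using common utilities (`nproc`, `lscpu`, `free`, and `nvidia-smi`); `nvidia-smi` is only available on the GPU compute nodes where the NVIDIA driver is installed.

```shell
# Logical core count (compare against the "Cores per Node" column)
nproc

# CPU model and socket layout
lscpu | grep -E 'Model name|Socket\(s\)|Core\(s\) per socket'

# Total installed memory
free -h | awk '/^Mem:/ {print $2}'

# GPU inventory (compute nodes only; requires the NVIDIA driver)
if command -v nvidia-smi >/dev/null; then
    nvidia-smi -L
fi
```

On the login/storage node, the GPU listing step is simply skipped, since no accelerator cards are installed there.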