The Student HPC Platform has been enhanced to provide students with a more accessible and user-friendly computing environment. Starting from 1 August 2023, users will have access to a new HPC Platform that includes an additional web environment.
While users can still connect through traditional SSH, the new web-based interface offers a seamless and intuitive experience for developing and running applications. Built-in applications with web GUI interfaces, such as Jupyter Notebook/JupyterLab and RStudio, make it easy for students to develop their applications in a web browser.
The new platform also offers easy-to-use, browser-based ways to reach the HPC system itself, including a traditional SSH terminal and a GUI desktop. These give students CPU and GPU resources for applications in data visualization, simulation, data analytics, modelling, and more.
In addition, the new web portal offers easy file management, allowing users to upload, download, and manage their files and data quickly and efficiently with nothing more than a web browser. The desktop experience is designed to be familiar, so students can work in an environment they are comfortable with.
To get started, simply click here to create your account free of charge. Once your account is created, visit here to start exploring the new platform. We look forward to seeing you there!
Hardware and Software Resources
(Effective from 1 Aug 2023)
Hardware
Total no. of CPU cores | Over 100 cores
Total memory size | Over 2 TB
GPU cards | 4 x NVIDIA RTX 3080 Ti
*We are upgrading the CPU and GPU resources on the Student HPC Platform and aim to complete the upgrade by the end of 2023.
Software
OpenHPC Stack
Category | Component
Base OS | Rocky Linux release 8.6 (x86_64)
Compilers | GNU 9.4.0 (gcc, g++, gfortran); GNU 12.2.0 (gcc, g++, gfortran)
MPI libraries | OpenMPI, MPICH
Environment modules | Lmod
Resource manager | SLURM, Munge
Serial/threaded/parallel libraries | Boost, FFTW, Hypre, PETSc, ScaLAPACK, LAPACK, OpenBLAS, SuperLU, Trilinos
I/O libraries | HDF5, NetCDF, ADIOS
Development tools | Autoconf, Automake, Libtool, Valgrind
Debugging and profiling tools | TAU, Likwid, Dimemas
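For students who want to drive the MPI libraries above from Python, the following is a minimal "hello world" sketch. It assumes the mpi4py package has been installed (for example, into an Anaconda environment; mpi4py is not listed among the pre-installed components) and that the script is started with an MPI launcher such as mpirun.

    # mpi_hello.py -- minimal MPI example (assumes mpi4py is installed,
    # e.g. via conda; mpi4py is not part of the pre-installed stack)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all ranks
    rank = comm.Get_rank()     # this process's rank (0..size-1)
    size = comm.Get_size()     # total number of ranks

    print(f"Hello from rank {rank} of {size}")

Launched with, for example, mpirun -np 4 python mpi_hello.py, each of the four ranks prints its own line.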
Installed Applications
Application | Versions
Anaconda3 | Anaconda3/2023.03-1
Apptainer | 1.1.8
CUDA | 12.1
FSL | 6.0.6.5
Go | 1.20.4
Gromacs | 2023.1, 2023.1-GPU
LAMMPS | 28Mar2023, 28Mar2023-GPU
NVHPC | 23.5
Plumed | 2.9.0
Quantum ESPRESSO | 7.2
Singularity | 3.11.3
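Since CUDA 12.1 and the GPU builds of Gromacs and LAMMPS only run usefully on nodes with GPUs attached, it is worth confirming that a job actually sees a GPU before starting real work. The snippet below is a small sanity-check sketch that queries nvidia-smi (shipped with the NVIDIA driver) from Python; it assumes it is run inside a job on the GPU queue.

    # gpu_check.py -- confirm the allocated GPU is visible to the job
    import subprocess

    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        # nvidia-smi is absent on CPU-only nodes
        raise SystemExit("nvidia-smi not found -- this is not a GPU node")

    if result.returncode == 0 and result.stdout.strip():
        # e.g. "NVIDIA GeForce RTX 3080 Ti, 12288 MiB"
        print(result.stdout.strip())
    else:
        print("No GPU visible -- check that the job requested a GPU")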
Job Queue Configuration
Queue name | No. of nodes | No. of CPU cores per node | Usable memory (GB) per node | No. of GPU cards | Status
h07q1 | 10 | 24 | 500 | N/A | 4 nodes in service; remaining nodes available by end of October
h07gpuq1 | 1 | 24 | 256 | 4 | In service
h07q2 | 8 | 36 | 500 | N/A | 4 nodes in service; remaining nodes available by end of September
Resource Limits
User Disk Quota: 70 GB (18 GB for pre-installed Python packages)
For General Job Submission:
Queue name | Max CPU cores per job | Max concurrent running jobs per user | Max concurrent submitted jobs per user | GPU cards per job | Max run time per job (hours)
h07q1 | 24 | 2 | 3 | N/A | 72
h07gpuq1 | 6 | 1 | 2 | 1 | 72
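As a worked illustration of these limits, the sketch below builds and submits a batch script that stays within the h07q1 caps (at most 24 CPU cores and 72 hours per job). It assumes the queue names above correspond directly to SLURM partition names and that the sbatch command is on the user's PATH; both are assumptions, so adapt it to the platform's actual submission instructions.

    # submit_h07q1.py -- sketch: submit a SLURM job within the h07q1 limits
    import subprocess

    batch_lines = [
        "#!/bin/bash",
        "#SBATCH --partition=h07q1",  # queue name, assumed to be the partition name
        "#SBATCH --ntasks=24",        # h07q1 allows at most 24 CPU cores per job
        "#SBATCH --time=72:00:00",    # h07q1 run-time limit is 72 hours
        "srun hostname",              # placeholder for the real workload
    ]

    with open("job.sh", "w") as f:
        f.write("\n".join(batch_lines) + "\n")

    # sbatch prints "Submitted batch job <jobid>" on success
    subprocess.run(["sbatch", "job.sh"], check=True)

Note that each user may have at most 2 such jobs running and 3 submitted on h07q1 at any one time.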
Jupyter Notebook/Lab App:
Queue name | Max CPU cores per job | Max memory (GB) per job | Max runtime/walltime (hours) | Min CPU cores per job | Min memory (GB) per job
h07q1 | 8 | 128 | 48 | 2 | 4
h07gpuq1 | 6 | 64 | 36 | 2 | 4
RStudio App:
Queue name | Max CPU cores per job | Max memory (GB) per job | Max runtime/walltime (hours) | Min CPU cores per job | Min memory (GB) per job
h07q1 | 8 | 128 | 48 | 2 | 4
h07gpuq1 | 6 | 64 | 36 | 2 | 4
Desktop App:
Queue name | Max CPU cores per job | Max memory (GB) per job | Max runtime/walltime (hours) | Min CPU cores per job | Min memory (GB) per job
h07q1 | 8 | 128 | 36 | 2 | 4
h07gpuq1 | 6 | 64 | 36 | 2 | 4