SoT cluster - the School of Technology computing cluster


Smallvoice uses the Slurm workload manager to create a computing cluster.

* When logged on to the cluster, you are always on the login node, called freedom, and should do all your work there.
* Home folders for all users are hosted on an NFS server, so every node sees the same “physical” disks.
* All user jobs should be submitted with sbatch job.sh (see the sketch below); please do not run jobs locally on the login node.
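
A minimal sketch of such a job.sh (the job name, time limit, and train.py command are placeholders, and the --gres line assumes GPUs are exposed through Slurm's generic resources):

```bash
#!/bin/bash
#SBATCH --job-name=my-job        # placeholder job name
#SBATCH --output=my-job-%j.log   # %j expands to the Slurm job ID
#SBATCH --time=01:00:00          # wall-clock limit, must fit the partition's time limit
#SBATCH --gres=gpu:1             # request one GPU (assumes a gres GPU configuration)

# The actual work; python3 train.py is a placeholder for your own command
python3 train.py
```

Submit it from freedom with sbatch job.sh; because home folders are on NFS, the script and its data are visible on every node.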

The computing (slurm) environment

There are 2 partitions (queues):

| Name    | Nodes | GPU             | Time limit |
| ------- | ----- | --------------- | ---------- |
| basic   | 3     | Nvidia A100 GPU | 31 hours   |
| lvlWork | 3     | Nvidia A100 GPU | no limit   |
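
To check the partitions and your jobs from the login node, the standard Slurm commands apply:

```bash
sinfo             # list partitions, their time limits and node states
squeue -u $USER   # list your own pending and running jobs
```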

The default queue for both students and staff is basic, so it's not necessary to choose a queue in your script file, but it's possible to specify a different one.
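
For example, to use lvlWork instead of the default, either set the partition in the script or pass it at submission time:

```bash
# In job.sh:
#SBATCH --partition=lvlWork

# Or on the command line:
sbatch --partition=lvlWork job.sh
```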

Installed software and drivers

* NVIDIA A100 GPU drivers
* CUDA toolkit [version 11.7]
* Python 3.9.7
* pip 20.3.4
* ffmpeg + sox
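
To verify what a node actually provides, the usual version flags can be run (assuming nvcc and the other tools are on the PATH):

```bash
nvidia-smi         # GPU driver version and GPU status
nvcc --version     # CUDA toolkit version
python3 --version
pip --version
ffmpeg -version
sox --version
```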

If additional software or a different version is needed, you can ask the sysadmin (help@ru.is) for assistance.
