Smallvoice uses the Slurm workload manager to create a computing cluster.
When logged on to the cluster, the user is always on the login node, called freedom; all work should be prepared and submitted from there.
Home folders for all users are hosted on an NFS server, so every node sees the same "physical" disks.
All user jobs should be submitted with `sbatch job.sh`; please do not run jobs locally on the login node.
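As a minimal sketch, a `job.sh` file for sbatch might look like the following (the job name, output file, and Python script are placeholders, not names used on Smallvoice):

```bash
#!/bin/bash
#SBATCH --job-name=example          # name shown in squeue (placeholder)
#SBATCH --output=example-%j.out     # stdout/stderr file, %j expands to the job id
#SBATCH --ntasks=1                  # a single task
#SBATCH --time=01:00:00             # requested wall time (hh:mm:ss)

# Everything below runs on a compute node, not on freedom.
python3 my_script.py                # placeholder for your actual workload
```

Submit it from freedom with `sbatch job.sh` and monitor it with `squeue -u $USER`; the output file is written into your NFS home folder, so it is visible from every node.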
There are 2 partitions (queues):
| Name    | Nodes | GPU             | Time limit |
|---------|-------|-----------------|------------|
| basic   | 3     | NVIDIA A100 GPU | 31 hours   |
| lvlWork | 3     | NVIDIA A100 GPU | no limit   |
The default queue for both students and staff is basic, so it is not necessary to choose a queue in your script file, but it is possible to specify a different one, as shown in the sketch below.
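A `#SBATCH --partition` line in the job script overrides the default queue. The sketch below also requests a GPU with `--gres`; whether GPUs must be requested explicitly on Smallvoice is an assumption, so adjust it to the local setup:

```bash
#!/bin/bash
#SBATCH --job-name=long-train       # placeholder name
#SBATCH --partition=lvlWork         # override the default "basic" queue
#SBATCH --gres=gpu:1                # request one A100 (assumes GRES-based GPU scheduling)
#SBATCH --output=long-train-%j.out

nvidia-smi                          # confirm the GPU is visible to the job
python3 train.py                    # placeholder for the actual training script
```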
The following software is installed on the cluster nodes:
* NVIDIA A100 GPU drivers
* CUDA toolkit (version 11.7)
* Python 3.9.7
* pip 20.3.4
* ffmpeg + sox
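A quick way to confirm these versions from a batch job is sketched below; it assumes `nvcc`, `pip`, and the other tools are on the default PATH of the compute nodes:

```bash
#!/bin/bash
#SBATCH --job-name=check-versions
#SBATCH --output=check-versions-%j.out

nvidia-smi                          # driver version and GPU visibility
nvcc --version                      # CUDA toolkit, expected 11.7
python3 --version                   # expected 3.9.7
pip --version                       # expected 20.3.4
ffmpeg -version | head -n 1
sox --version
```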
If additional software or a different version is needed, you can ask the sysadmin (help@ru.is) for assistance.