Language and Voice Lab computing cluster

Smallvoice uses the Slurm workload manager to form a computing cluster.
The cluster has 6 nodes:

Node name   Role(s)
atlas       management node, worker
freedom     login node, worker
hercules    worker node
samson      worker node
goliath     worker node
obelix      worker node
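
To see the current state of these nodes from the login node, the standard Slurm query commands can be used (a quick sketch; the node name freedom comes from the table above, and the exact output depends on the cluster's Slurm configuration):

sinfo -N -l                   # list all nodes with their state and resources
scontrol show node freedom    # show details for a single node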


When logged on to the cluster, users are always on the login node, freedom, and do all their work there,
but home (and work) are the same physical disk on all nodes.
To run a job through the Slurm workload manager, first create an executable batch file with information about the job,
and then submit it with sbatch myJob.sh (see the example batch file and workflow below).

Example batch file
#!/bin/bash
# Slurm account to charge and a name for the job
#SBATCH --account=staff
#SBATCH --job-name=MyJob
# Resources: one GPU per node and 2 GB of memory per CPU
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2G
# File the job's output is written to
#SBATCH --output=myBatch.log
# The commands the job should run follow here, e.g. a (hypothetical) python3 my_script.py

Create the file with vi myJob.sh (save it) and make it executable with chmod +x myJob.sh.
This example file is also available in the user home folder and can be viewed with cat myJob.sh.
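
Once myJob.sh exists, a typical submit-and-monitor workflow looks roughly like this (sbatch, squeue and tail are standard commands; myBatch.log is the output file named in the example above):

sbatch myJob.sh       # submit the job to the Slurm queue from the login node
squeue -u $USER       # list your queued and running jobs
tail -f myBatch.log   # follow the job output once it starts writing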

Installed software and drivers

* NVIDIA A100 GPU drivers
* CUDA toolkit (version 11.7)
* Python 3.9.2
* pip 20.3.4
* Intel oneAPI Math Kernel Library
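
The installed versions can be checked on a node with the usual version flags (a sketch assuming nvcc and python3 are on the PATH; exact output depends on the node):

nvidia-smi          # NVIDIA A100 driver and GPU status
nvcc --version      # CUDA toolkit (11.7)
python3 --version   # Python 3.9.2
pip --version       # pip 20.3.4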
