Language and Voice lab computing cluster
The cluster, Smallvoice, uses the Slurm workload manager to schedule jobs across its nodes.
The cluster has six nodes:
| Node name | Role(s) |
|---|---|
| atlas | management node, worker |
| freedom | login node, worker |
| hercules | worker node |
| samson | worker node |
| goliath | worker node |
| obelix | worker node |
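As a quick sketch (assuming Slurm's standard client tools are on your PATH on the login node), you can list the nodes and their current state with `sinfo`:

```shell
# Show every node with its partition, state, CPUs and memory (long, per-node view)
sinfo -N -l
```

The node names in the output should match the table above; the partition names depend on how the cluster administrators configured Slurm.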
When logged on to the cluster, users are always on the login node, freedom, and do all their work there.
The home (and work) directories live on the same "physical" disk, which is shared across all nodes.
To run a job under the Slurm workload manager, first create a batch file with information about the job,
then submit the job with sbatch myJob.sh

Example batch file:
#!/bin/bash
#SBATCH --account=staff
#SBATCH --job-name=MyJob
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2G
#SBATCH --output=myBatch.log

# The commands the job should run go below the #SBATCH directives, e.g.:
srun hostname
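A minimal submit-and-monitor sequence, run from the login node (the job ID shown by `sbatch` is needed for the `sacct` step; `<jobid>` below is a placeholder):

```shell
# Submit the job described in the batch file; Slurm prints the assigned job ID
sbatch myJob.sh

# List your own pending and running jobs in the queue
squeue -u $USER

# After the job finishes, inspect its accounting record (replace <jobid>)
sacct -j <jobid>
```

Output from the job itself lands in the file named by `--output` (myBatch.log in the example above).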
lvl_cluster/info.1671451900.txt.gz · Last modified: 2024/10/14 14:24 (external edit)