=====Language and Voice Lab computing cluster=====

The cluster has 6 nodes:

^ Node name ^ Role(s) ^
| **atlas** | management node, worker |
| **freedom** | **login** node, worker |
| hercules | worker node |
| samson | worker node |
| goliath | worker node |
| obelix | worker node |

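Once logged in, the same layout can be checked with Slurm's own commands; a quick sketch, assuming the standard Slurm client tools are on the PATH of the login node:

<code bash>
# List every node known to Slurm, one line per node, with its state and resources
sinfo -N -l

# Show jobs currently queued or running on the cluster
squeue
</code>
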
When logged on to the cluster, the user is always on the login node, **freedom**, and does all their work there,\\ but the home (and work) directories are the **same** shared storage on every node, so jobs running on the workers see the same files as the login session.
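A quick way to see this sharing in action, sketched here on the assumption that short interactive ''srun'' commands are allowed on the default queue:

<code bash>
# Create a file in your home directory on the login node (freedom)
touch ~/shared_fs_test

# Run a one-off command on a worker node: it prints the worker's hostname
# and finds the very same file, because home is shared across the nodes
srun --ntasks=1 bash -c 'hostname; ls -l ~/shared_fs_test'
</code>
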
To use the Slurm workload manager for your job, you first create an executable batch file with information about the job,\\
and then submit the job with ''sbatch myBatch.sh''.

Example batch file:

<code bash>
#!/bin/bash
#SBATCH --job-name=MyJob
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2G

# The commands that make up the job follow the #SBATCH directives
</code>

Create the batch file with a text editor on the login node.

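Once the file exists, a minimal submit-and-monitor round trip looks roughly like the sketch below; ''<jobid>'' is a placeholder for the job ID that Slurm assigns, and the output file name is Slurm's default:

<code bash>
# Submit the job; Slurm replies with a job ID
sbatch myBatch.sh

# Watch your own jobs in the queue
squeue -u $USER

# By default the job's stdout/stderr end up in slurm-<jobid>.out
# in the directory you submitted from
cat slurm-<jobid>.out

# Cancel a job if needed
scancel <jobid>
</code>
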
===The computing (Slurm) environment===
There are 3 queues (partitions):

  * allWork (for staff only)
  * doTrain (for staff only)
  * beQuick (for students)

The default queue is doTrain, so when using Slurm it's not necessary to choose a queue, but it's possible to specify another one, as shown in the sketch below.

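The partition can be named either in the batch file or on the command line; a sketch using the ''beQuick'' queue:

<code bash>
# Inside the batch file, add a partition directive:
#SBATCH --partition=beQuick

# ...or override the partition at submission time:
sbatch --partition=beQuick myBatch.sh

# List the partitions and their current state
sinfo -s
</code>
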
===Installed software and drivers===
  * **NVIDIA** A100 GPU drivers
  * **CUDA toolkit** (version 11.7)
  * Intel oneAPI Math Kernel Library
  * Python 3.9.2
  * pip 20.3.4
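
The installed versions can be checked from a shell; a quick sketch, assuming the CUDA toolkit's ''bin'' directory is on the PATH and that the GPU-related commands are run on a node where a GPU is allocated (e.g. inside a job):

<code bash>
# Driver and GPU status
nvidia-smi

# CUDA toolkit version
nvcc --version

# Python and pip versions
python3 --version
pip --version
</code>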