=====Language and Voice lab computing cluster=====

The cluster has 6 nodes:\\
^ Node name ^ Role(s) ^
| **atlas** | management node, worker |
| **freedom** | **login** node, worker |
| hercules | worker node |
| samson | worker node |
| goliath | worker node |
| obelix | worker node |
\\
When logged on to the cluster, the user is always on the **login** node, **freedom**, and does all their work there,\\ but the home (& work) directories are the **same** on every node, so files are visible everywhere.\\
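
For illustration, reaching the cluster is a normal SSH login to the login node (the hostname below is a placeholder, not the real address; ask the lab admins for it):\\
''ssh your_username@freedom.example.org   # hypothetical hostname, for illustration only\\
''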
To use the Slurm workload manager for a job, you first create an executable batch file with information about the job\\ and then submit the job with ''sbatch myJob.sh''.\\
Example batch file:\\
''#!/bin/bash\\
#SBATCH %%--%%job-name=MyJob\\
#SBATCH %%--%%gpus-per-node=1\\
#SBATCH %%--%%mem-per-cpu=2G\\
''\\
Create the batch file with a text editor on the login node.\\
**NVIDIA** A100 GPUs, **drivers** and the **CUDA toolkit** (version 11.7) are installed.\\
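
To verify that a job actually gets a GPU, a quick check is to run ''nvidia-smi'' inside it (a minimal sketch; the job name is arbitrary):\\
''#!/bin/bash\\
#SBATCH %%--%%job-name=gpu-check\\
#SBATCH %%--%%gpus-per-node=1\\
nvidia-smi   # prints the allocated GPU, driver version and CUDA version\\
''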