Most users are using Python for their jobs\\
Set up your environment (Python virtual environment or Anaconda) and install the libraries your project needs in your home folder\\
\\
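For example, a Python virtual environment in your home folder could be set up like this (the environment name ''~/myenv'' and the packages are placeholders, install whatever your project needs):

<code bash>
# create and activate a virtual environment in your home folder
python3 -m venv ~/myenv
source ~/myenv/bin/activate

# install the libraries your project needs
pip install numpy pandas
</code>

With Anaconda the equivalent would be ''conda create -n myenv'', then ''conda activate myenv'' and ''conda install'' the packages you need.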
When you are ready to send your job to the slurm queue, adjust the slurm command script (a full example script is shown below the table)\\
Slurm will put your job in the queue; if all the resources you ask for are available it will go to the Running (R) state, otherwise it will be Pending (PD)\\
|''#SBATCH --mem=4G''|Job wants 4 GB of memory|
|''#SBATCH --time=0''|Time limit on the job, e.g. time=11:00:00 (11 hours); 0 means no limit|
|''#SBATCH --partition=Lokaverk''|Send the job to run in this queue|
|''#SBATCH <nowiki>--</nowiki>output=myBatch.log''|Log file for environment & slurm|
|''python3 file.py''|Put your run command after the directives|
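Putting the directives from the table together, a minimal slurm command script might look like this (the 11-hour limit and the file names are just placeholders, adjust them for your project):

<code bash>
#!/bin/bash
#SBATCH --mem=4G                 # job wants 4 GB of memory
#SBATCH --time=11:00:00          # time limit of 11 hours (0 = no limit)
#SBATCH --partition=Lokaverk     # send the job to run in this queue
#SBATCH --output=myBatch.log     # log file for environment & slurm

# your run command goes after the directives
python3 file.py
</code>

If the script is saved as ''myBatch.cmd'' (name it whatever you like), submit it with ''sbatch'' and check its state with ''squeue'' (R = Running, PD = Pending):

<code bash>
sbatch myBatch.cmd
squeue -u $USER
</code>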
When your job needs a GPU, add this line to your slurm cmd file\\
#SBATCH %%--%%gpus-per-node=1\\
**Note** : We recommend all students in this 3-week course run their **GPU job** for a short time, using the "%%--%%time=hh:mm:ss" slurm directive, so all jobs get some GPU time\\
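For example, a short GPU run for this course could combine the two directives like this (the one-hour limit is only an illustrative value):

<code bash>
#SBATCH --gpus-per-node=1     # request one GPU on the node
#SBATCH --time=01:00:00       # keep the run short so all jobs get some GPU time
</code>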
  
==Other slurm directives==
#SBATCH %%--%%mem-per-cpu=2G : 2 GB of memory per core\\
#SBATCH %%--%%cpus-per-task=2 : 2 cores per process/task\\
#SBATCH %%--%%ntasks-per-node=4 : 4 processes per node/worker\\
[[https://slurm.schedmd.com/sbatch.html|Slurm sbatch]]\\
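As a sketch, a CPU-only job combining these directives would request 4 tasks × 2 cores = 8 cores and 8 × 2 GB = 16 GB of memory on one node (the values are only examples):

<code bash>
#SBATCH --ntasks-per-node=4    # 4 processes per node/worker
#SBATCH --cpus-per-task=2      # 2 cores per process/task
#SBATCH --mem-per-cpu=2G       # 2 GB of memory per core
</code>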
===Tools and libraries installed===
CUDA toolkit version 11.7\\