Welcome
The smallvoice computing cluster at RU.is
Use this email, compute@ru.is, for access requests, inquiries, etc.
To log on from a terminal: USER@smallvoice.ru.is
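Assuming the connection is made over SSH (the usual way to reach a cluster login node; replace USER with the username you were given), a minimal example:

  # Connect to the smallvoice login node over SSH
  ssh USER@smallvoice.ru.is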
We use Slurm as the workload management and scheduling system; through it, users have access to shared computing resources.
The login node is called freedom; the management server is atlas.
Slurm currently has 5 partitions (queues) and 11 worker nodes.
The default partition is called doTrain and is for LVL researchers.
Last autumn, two trial groups of students started using the cluster; they use a partition called beQuick.
A partition called bigVoice is used for special projects in LVL.
The fourth queue, called cpuMem, has 3 nodes with only CPU and memory.
For this 3-week course project a new partition called Lokaverk has been created; all users in this course should send their jobs to this queue (partition).
The Lokaverk queue has 2 GPU nodes and 2 CPU nodes.
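To check the partitions and nodes yourself, the standard Slurm status commands can be run on the login node; a small sketch using the partition name from this page:

  # List all partitions with their nodes and state
  sinfo
  # Show only the course partition
  sinfo -p Lokaverk
  # Show your own queued and running jobs
  squeue -u $USER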
Preferred practices:
- Good to have, if possible (see the sketch after this list for how these map to Slurm options):
  - What steps your job needs; does one part have to finish before another can run, etc.
  - An estimate of how many tasks.
  - What resources each task needs (CPU, memory, GPU).
  - Whether every step requires the same resources.
  - An estimate of how long they will run.
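As a sketch of how step ordering can be expressed, Slurm can hold one job until another has finished successfully; the script names preprocess.sh and train.sh below are only placeholders:

  # Submit the first step and capture its job ID
  jobid=$(sbatch --parsable preprocess.sh)
  # The second step starts only after the first one completes successfully
  sbatch --dependency=afterok:$jobid train.sh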
All jobs must be run through Slurm using “sbatch script.sh”.
Please refrain from running interactive jobs on the login node.
In your home directory there is a readMe and a template script.sh file, which can be adjusted to each job's needs.
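The readMe and template in your home directory are the authoritative reference; the sketch below only illustrates the kind of #SBATCH header such a script typically contains. The partition name comes from this page, while the job name, resource amounts, and the final command are placeholders to adjust:

  #!/bin/bash
  #SBATCH --partition=Lokaverk     # queue for this course
  #SBATCH --job-name=myjob         # placeholder job name
  #SBATCH --ntasks=1               # number of tasks
  #SBATCH --cpus-per-task=4        # CPU cores per task
  #SBATCH --mem=16G                # memory per node
  #SBATCH --gres=gpu:1             # request one GPU (omit on the CPU-only nodes)
  #SBATCH --time=02:00:00          # wall-clock limit, hh:mm:ss

  # The actual work of the job goes here, for example:
  python train.py

The script is then submitted with “sbatch script.sh” as described above.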
Connection error?