Submitting jobs
Whatever you read here may need to be adjusted to fit your specific case.
Do not hesitate to ask for help when needed.
Filesystems
Only some filesystems are available on the compute nodes. Compute nodes will not be able to access any data from filesystems that are not listed here:
- /work
- /scratch
- /home/
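As a quick sanity check (a sketch, assuming the standard df tool on a login node), you can confirm these filesystems are mounted and see how much space is free:
# Show size, used, and available space for the cluster filesystems
df -h /work /scratch /home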
Slurm partitions
There are currently 2 partitions, normal and bigmem.
The normal partition is the default: if you submit a job without specifying which partition should be used, your job will be placed in the normal partition.
The normal partition is limited to 250GB of RAM; in case you need more than that, please use the bigmem partition.
The -p option can be used to specify the needed partition.
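For example (my_job.sbatch is a hypothetical script name), the partition can be requested either at submission time or inside the batch script:
# Request the bigmem partition on the command line when submitting
sbatch -p bigmem my_job.sbatch

# ...or request it inside the script itself
#SBATCH -p bigmem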
Load necessary software
By default, only some software is available when you log in. To be able to use other software, you should first load it.
The module command will help you manage modules and their dependencies.
To check which modules are currently loaded, use:
module list
Expected output is,
Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc
To check which modules are available (installed and ready to be loaded), use the command below. The same command can also be used to search for a specific package.
module avail
Expected output is,
--------------------------------------------------- /opt/ohpc/pub/moduledeps/gnu8-openmpi3 ----------------------------------------------------
   adios/1.13.1    hypre/2.18.1   netcdf-cxx/4.3.1       petsc/3.12.0       py2-scipy/1.2.1    scorep/6.0      trilinos/12.14.1
   boost/1.71.0    imb/2018.1     netcdf-fortran/4.5.2   phdf5/1.10.5       py3-mpi4py/3.0.1   sionlib/1.7.4
   dimemas/5.4.1   mfem/4.0       netcdf/4.7.1           pnetcdf/1.12.0     py3-scipy/1.2.1    slepc/3.12.0
   extrae/3.7.0    mpiP/3.4.1     omb/5.6.2              ptscotch/6.0.6     scalapack/2.0.2    superlu_dist/6.1.1
   fftw/3.3.8      mumps/5.2.1    opencoarrays/2.8.0     py2-mpi4py/3.0.2   scalasca/2.5       tau/2.28

-------------------------------------------------------- /opt/ohpc/pub/moduledeps/gnu8 --------------------------------------------------------
   hdf5/1.10.5    metis/5.1.0   mvapich2/2.3.2   openblas/0.3.7       pdtoolkit/3.25     py3-numpy/1.15.3
   likwid/4.3.4   mpich/3.3.1   ocr/1.0.1        openmpi3/3.1.4 (L)   py2-numpy/1.15.3   superlu/5.2.1

------------------------------------------------------------- /tools/modulefiles --------------------------------------------------------------
   MEGAHIT/1.2.9

---------------------------------------------------------- /opt/ohpc/pub/modulefiles ----------------------------------------------------------
   EasyBuild/3.9.4     clustershell/1.8.2   gnu7/7.3.0       llvm5/5.0.1   pmix/2.2.2          valgrind/3.15.0
   autotools    (L)    cmake/3.15.4         gnu8/8.3.0 (L)   ohpc   (L)    prun/1.3   (L)
   charliecloud/0.11   gnu/5.4.0            hwloc/2.1.0      papi/5.7.0    singularity/3.4.1

  Where:
   L:  Module is loaded

Use "module spider" to find all possible modules.

Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
To search for a module,
module avail <<keyword>>
# OR
module spider <<keyword>>
To load a module do,
module load <<MODULENAME/VERSION>>
Loading a module can be done in the following 3 steps (a worked example follows the list):
1. Locate the module: module avail
2. Check how to load it: module spider <<MODULENAME/VERSION>>
3. Load your module using the instructions from step 2
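A minimal sketch of these 3 steps, using MEGAHIT/1.2.9 from the listing above as the example module:
# Step 1: locate the module among the available modules
module avail MEGAHIT

# Step 2: check how to load it (module spider also reports any prerequisite modules)
module spider MEGAHIT/1.2.9

# Step 3: load the module as instructed by step 2
module load MEGAHIT/1.2.9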
Read more about module usage at https://lmod.readthedocs.io/en/latest/010_user.html
Prototype of batch script
This prototype should be in a script file, for example, my_first_script.sbatch
#!/bin/bash

#SBATCH -J test                       # Job name
#SBATCH -o /work/<<UID>>/job.%j.out   # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal                     # Partition to use, another possible value is bigmem
#SBATCH -N 1                          # Total number of nodes requested
#SBATCH -n 16                         # Total number of CPUs / MPI tasks requested
#SBATCH -t 01:30:00                   # Run time ([d-]hh:mm:ss) - 1.5 hours

# Load your software/command
module load CMD/version

# Run your command
CMD [OPTIONS] ARGUMENTS
To run an sbatch script, use
sbatch <<script name>>
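A short sketch of a typical submission, assuming the prototype above was saved as my_first_script.sbatch; squeue is the standard Slurm command for checking the queue:
# Submit the script; sbatch prints the job ID, e.g. "Submitted batch job 12345"
sbatch my_first_script.sbatch

# Check the state of your own pending and running jobs
squeue -u $USER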
Example of sbatch script
Let's assume a few things here:
- You are logged in
- Your data is available
- The needed software is available
- The test will be run in the folder /work/test/
Preparing for the run,
mkdir /work/test/
Let's try to run an assembly using megahit,
#!/bin/bash

#SBATCH -J test                     # Job name
#SBATCH -o /work/test/job.%j.out    # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal                   # Partition to use, another possible value is bigmem
#SBATCH -N 1                        # Total number of nodes requested
#SBATCH -n 16                       # Total number of CPUs / MPI tasks requested
#SBATCH -t 01:30:00                 # Run time ([d-]hh:mm:ss) - 1.5 hours

# Load the available MEGAHIT module
module load MEGAHIT/1.2.9

# Work directory and input reads
based=/work/test
tmp_dir=$based/tmp
output_dir=$based/output
f_read=/tools/test_data/assembly/r3_1.fa.gz
r_read=/tools/test_data/assembly/r3_2.fa.gz

# Create the necessary folders (-p avoids an error if they already exist)
mkdir -p $output_dir $tmp_dir

# Run the assembly
megahit -1 $f_read -2 $r_read --tmp-dir $tmp_dir --out-prefix $output_dir/r3
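Once the job has finished, a possible way to check how it went (a sketch; sacct is the standard Slurm accounting command, and the stdout path follows the #SBATCH -o line above):
# Show the final state and resource usage of the job (replace <<jobid>> with the ID printed by sbatch)
sacct -j <<jobid>>

# Inspect the job's stdout file written to /work/test/
cat /work/test/job.<<jobid>>.out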