Introduction to the PSFC cluster

A no-frills introduction to the cluster and its environment

 

1.  mailing list

    everyone is subscribed to the engaging1-users@mit.edu mailing list

2.  login

    ssh -l username eofe7.mit.edu 
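
    If you log in often, an entry in ~/.ssh/config on your local machine can
    shorten this; the "eofe" alias and "username" below are placeholders to
    adapt:

        Host eofe
            HostName eofe7.mit.edu
            User username

    after which "ssh eofe" suffices.
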
3.  directories

    /home/username - working space for source code, scripts, hand-edited files, etc.
    /nobackup1/username - Lustre parallel file system for parallel I/O
    /pool001/username - NFS file system if you need it
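
    For example, to set up a scratch directory and check space (the mkdir is
    only needed if your directory does not already exist):

        mkdir -p /nobackup1/username      # create your scratch directory
        du -sh /nobackup1/username        # space used by your files
        df -h /nobackup1                  # how full the file system is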

4.  software

    module avail
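
    For example, to find and load a compiler and MPI stack (these are the
    same modules used in the batch script below; check module avail for the
    exact versions installed):

        module avail gcc          # list gcc-related modules
        module add gcc            # load the default gcc
        module add mvapich2/gcc   # load MPI built against gcc
        module list               # show currently loaded modules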

5.  scheduler

    SLURM
    1.  what jobs are running

        squeue -a

    2.  what nodes are temporarily reserved for specific users

        sinfo -T

    3.  what is running in the default short test queue
        (useful for getting started and debugging)

        squeue -p sched_any_quicktest

    4.  what nodes are in a particular "partition"

        sinfo -p sched_neu_cooperman

    Note that nodes temporarily allocated to dedicated projects are shown
    as down*.

    1.  launch an interactive session e.g. on one node with 16 cores
        and exclusive use of the node

        salloc -N 1 -n 16 -p sched_mit_hill --time=1:00:00 --exclusive

    2.  ask for a node with a GPU resource

        salloc --gres=gpu:1 -N 1 -n 16 -p sched_mit_hill --time=1:00:00 --exclusive

    3.  batch job running a small example (the CUDA deviceQuery sample)

        # write the job script, then submit it with sbatch
        cat > myjob.slurm << '!'
        #!/bin/bash
        #SBATCH --gres=gpu:1
        #SBATCH -N 1
        #SBATCH -n 16
        #SBATCH --time=1:00:00
        #SBATCH --exclusive
        # set up the module environment, then run the CUDA deviceQuery sample
        . /etc/profile.d/modules.sh
        module add gcc
        module add mvapich2/gcc
        /cm/shared/apps/cuda55/sdk/current/1_Utilities/deviceQuery/deviceQuery
        !
        sbatch myjob.slurm
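
    Once a job is submitted, a few standard SLURM commands help keep track
    of it (the job ID 12345 is a placeholder for the ID that sbatch prints):

        squeue -u $USER           # list only your jobs
        scontrol show job 12345   # detailed state of one job
        scancel 12345             # cancel a job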

 

6.  useful tutorials: http://www.tchpc.tcd.ie/node/74

7.  base OS - RHEL/CentOS 7

 

 
