


PSFC Partition on the Engaging Cluster at MGHPCC

This page only applies to users from the PSFC lab.
Non-PSFC users with questions should email:

Information and forms regarding the PSFC Partition on the Engaging Cluster at MGHPCC

Apply for an account on the PSFC Partition (This requires your PSFC credentials)

Logging into the Engaging Cluster

You will need your MIT Kerberos credentials to access the cluster. After authenticating, you should be able to SSH to the PSFC partition on the Engaging Cluster with your MIT username and password.
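A typical login session might look like the following sketch. The hostname placeholder is an assumption, not a real machine name; use the login node given in the account-setup instructions above.

```shell
# Sketch of a login session. <engaging-login-node> is a placeholder --
# substitute the actual Engaging login hostname from your account
# instructions, and authenticate with your MIT Kerberos username.
ssh <mit-username>@<engaging-login-node>
```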


About the Massachusetts Green High Performance Computing Center (MGHPCC)


The new PSFC computational cluster consists of a 100-node compute subsystem integrated into the “Engaging Cluster,” which is located at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts.


The PSFC subsystem is operated as part of the “Engaging Cluster” and also has access to the 2.5 Petabyte Lustre parallel file system of the Engaging Cluster.


This 100-node subsystem is connected by a high-speed, non-blocking FDR InfiniBand fabric. Each InfiniBand lane is capable of 14 Gb/s with a latency of 0.7 microseconds. With four lanes per link, the effective node-to-node bandwidth is 56 Gb/s, or about 6.4 GB/s for user applications. Because the network is non-blocking, each node has full-bandwidth access to every other node as well as to the parallel file system.
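The stated figures can be sanity-checked with a little arithmetic. This sketch models only the raw signalling rate; the quoted 6.4 GB/s application figure is somewhat lower because of protocol and encoding overhead, which is not modeled here.

```python
# FDR InfiniBand: 14 Gb/s per lane (channel), 4 lanes per link.
lanes = 4
gbps_per_lane = 14.0

raw_gbps = lanes * gbps_per_lane  # aggregate node-to-node rate in Gb/s
raw_gbytes = raw_gbps / 8         # same rate in GB/s (raw, before overhead)

print(raw_gbps, raw_gbytes)  # 56.0 7.0
```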


Each compute node in the subsystem is configured as follows:

Processors:   2 × Intel Xeon E5 (Haswell-EP), 2.1 GHz, 16 cores each (32 cores total)
Memory:       128 GB DDR4; the default allocation is 4 GB per core
Local Disk:   1.0 TB

The full subsystem totals 3200 cores and 12.8 terabytes of memory.
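The subsystem totals follow directly from the per-node figures above; a quick check:

```python
# Per-node configuration, from the table above.
nodes = 100
cores_per_node = 2 * 16      # two 16-core Haswell-EP processors
mem_gb_per_node = 128        # GB of DDR4 per node

total_cores = nodes * cores_per_node             # cores in the subsystem
total_mem_tb = nodes * mem_gb_per_node / 1000    # total memory in TB
mem_per_core = mem_gb_per_node / cores_per_node  # default GB per core

print(total_cores, total_mem_tb, mem_per_core)  # 3200 12.8 4.0
```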

The individual compute nodes are very similar to the compute nodes in the “Cori – Phase 1” system at NERSC.


The following snippet is from Chris Hill, MIT's representative to the MGHPCC and a founding member of the Engaging platform: "It's called engaging1 (eo for short) because one of the goals is to develop more interactive and dynamic approaches to computational sciences. This is a concept some of us refer to as 'engaging supercomputing' and/or computational science 2.0. The cluster consists of several head or login nodes, hundreds to thousands of compute nodes, and a very large central Lustre storage system."


Thanks go to Dr. John Wright for preparing most of the information on this site regarding the PSFC partition on the Engaging Cluster.


Accessing the PSFC Engaging Cluster

A brief introduction to the PSFC Cluster

Basic Usage and Commands

PSFC Cluster FAQs

Cluster Demos

PSFC Engaging Cluster Nodes Status


For publications and reports making significant use of the PSFC partition on the engaging cluster, please add this text (or some suitable equivalent) to your acknowledgements:

"The simulations presented in this paper were performed on the MIT-PSFC partition of the Engaging cluster at the MGHPCC facility, which was funded by DoE grant number DE-FG02-91-ER54109."



77 Massachusetts Avenue, NW16, Cambridge, MA 02139

Massachusetts Institute of Technology