Computational Resources

Most of our work is carried out on supercomputing clusters managed and/or maintained by SUNCAT members.  Here we detail which resources are available and the basic policies for each.  To get started using any of these resources, contact Edward Truong (edtruong@stanford.edu) to set up an account.


"SLAC"

Our oldest and (currently) largest system is hosted at SLAC National Accelerator Laboratory, and is often referred to by group members as "SLAC."  SUNCAT members have access to several different queues based on the demands of their calculations:

Computing resources hosted at SLAC
  Queue Name    Processor Type         Cores/GPUs per node   Nodes Available   Memory per core (or GPU)   Interconnect             Cost Factor
  suncat-test   Nehalem X5550          8*                    2                 3 GB                       1 Gbit Ethernet          0.0
  suncat        Nehalem X5550          8*                    284               3 GB                       1 Gbit Ethernet          1.0
  suncat2       Westmere X5650         12*                   46                4 GB                       2 Gbit Ethernet          1.1
  suncat3       Sandy Bridge E5-2670   16*                   64                4 GB                       40 Gbit QDR Infiniband   1.8
  suncat-gpu    Nvidia M2090           7                     17                6 GB                       40 Gbit QDR Infiniband   0.0

* These numbers represent the number of logical cores in a system that utilizes hyper-threading.  Certain codes may use this technology more efficiently than others.
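As we understand it, the cost factor acts as a multiplier on the raw core-hours a job consumes when usage is accounted.  For example, a 16-core job running for 10 hours on suncat3 would be charged roughly 16 × 10 × 1.8 = 288 core-hours, while the same job on suncat-test or suncat-gpu incurs no charge.  If your allocation depends on the exact accounting, confirm the details with the cluster admins.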


"Sherlock"

We also have computing resources hosted at Stanford University in the cluster codenamed Sherlock.  Please note that not all Sherlock users are SUNCAT affiliates, so you may see unfamiliar usernames.  As Sherlock is relatively new, it is also relatively homogeneous, though the different use cases are detailed below (and on the cluster's own wiki):

  Node Type                  Processor / GPU                                 Cores/GPUs per node   Memory per CPU (or GPU)   Notes
  CPU / Default              Dual-Socket Xeon E5-2650                        16                    4 GB                      Will be used by default
  GPU                        NVIDIA Tesla K20Xm or GeForce GTX Titan Black   8                     32 GB                     Will be used by default
  Big Data / Increased RAM   Quad-Socket Xeon E5-4640                        21                    48 GB                     Use --qos='bigmem' and --partition='bigmem'

As detailed below, counting available nodes is complicated by the variety of partitions available to SUNCAT users.

Partitions

Please note that there is no "test" queue on Sherlock.  To check syntax, and for general Python scripting, enter "sdev" to move from the login node to a production node.
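For example, a quick interactive check might look like the following (a minimal sketch; the script name is a placeholder, and sdev accepts additional options described on the Sherlock wiki):

  # from a Sherlock login node, request an interactive session on a compute node
  sdev
  # once the session starts, sanity-check your script (placeholder name)
  python my_script.py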

All Sherlock users (including SUNCAT members) are able to submit jobs to the "normal" partition.  SUNCAT users are also able to submit to the "iric" partition (which is shared with several other research groups).  Most SUNCAT members will notice no effective difference between these partitions, and should generally submit to both.
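A minimal sketch of a batch script that targets both partitions is below (the job name, wall time, and run command are placeholders); when several partitions are listed, the scheduler should start the job in whichever can run it first:

  #!/bin/bash
  #SBATCH --job-name=example_job       # placeholder name
  #SBATCH --partition=normal,iric      # submit to both; whichever partition runs the job first wins
  #SBATCH --nodes=1
  #SBATCH --time=02:00:00              # placeholder wall-time limit
  python my_calculation.py             # placeholder run command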

Additionally, SUNCAT users may submit to the "owners" partition.  These are nodes purchased by other groups that would otherwise sit idle.  You are free to use them, but your job may be cancelled if the resource is reclaimed by its owner.  That said, several nodes are generally available in this partition at any time, so it is best suited to jobs that are either expected to be short or are equipped to restart efficiently.
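If you do use owners, one approach (a sketch, assuming your code writes checkpoint files it can resume from) is to mark the job as requeue-able so a preempted job goes back into the queue instead of simply dying:

  #!/bin/bash
  #SBATCH --partition=owners
  #SBATCH --requeue                    # re-queue the job if it is preempted by the owning group
  #SBATCH --time=04:00:00              # placeholder wall-time limit
  python my_restartable_job.py         # placeholder; must pick up from its own checkpoint files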

Other specialty partitions exist (gpu and bigmem), and are detailed on the Sherlock wiki.
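For reference, the bigmem flags from the Sherlock table above would sit in a batch script roughly as follows (the memory request and run command are placeholders):

  #!/bin/bash
  #SBATCH --qos=bigmem
  #SBATCH --partition=bigmem
  #SBATCH --mem=192G                   # placeholder; request only the memory you actually need
  python my_large_memory_job.py        # placeholder run command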

Running on GPUs

At this time, there are no quantum chemical software packages configured for GPU use on Sherlock.  However, if you are running other code that would specifically benefit from GPU acceleration, you can follow the instructions here.
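As a rough sketch (not the official instructions; the partition name follows the specialty partition mentioned above, and the GPU count and run command are placeholders), requesting a GPU node typically looks like:

  #!/bin/bash
  #SBATCH --partition=gpu
  #SBATCH --gres=gpu:1                 # request one GPU; increase the count if your code scales
  #SBATCH --time=01:00:00              # placeholder wall-time limit
  python my_gpu_accelerated_script.py  # placeholder; whatever GPU-enabled code you are running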