Advanced Research Computing Support

Main USNA Parallel Computing Cluster

The ARCS Group maintains the primary USNA distributed/parallel computing cluster to support faculty and student research and advanced computing. The cluster is located in a state-of-the-art computer room in Ward Hall. It comprises two (2) control computers, referred to as Head Nodes; forty-seven (47) parallel processing computers, referred to as Compute Nodes; and two (2) storage management computers, referred to as Storage Nodes.

All nodes use 12-core AMD Opteron 6174 processors. The Head Nodes and Storage Nodes each have two processors mounted on SuperMicro dual-CPU H8DGU-F motherboards housed in 2U SuperMicro AS-2022G-URF chassis. The Compute Nodes each have four processors mounted on SuperMicro quad-CPU H8QGi+-F motherboards housed in 1U SuperMicro AS-1042G-TF chassis.

All nodes are connected to three private networks: a 1 GigE network for system operations, a 40 Gb/s non-blocking InfiniBand network for computing communications, and a 1 GigE network for KVM/IPMI node management and control. The cluster is accessible only on the USNA Intranet, through its two Head Nodes. Both Head Nodes are in the domain.

ClusterSF manages most of the Compute Nodes for general faculty and student use, while ClusterSL manages a subset of the Compute Nodes for physics research. The cluster has 38 Compute Nodes active under the ClusterSF Head Node and 9 Compute Nodes active under the ClusterSL Head Node. Red Hat Enterprise Linux (RHEL) 6.5 is installed on ClusterSF, and RHEL 5.6 is installed on ClusterSL along with the Scyld cluster management software. Both head nodes use Moab/TORQUE for job scheduling and support various versions of MPI.
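Jobs on either head node are submitted to the Moab/TORQUE scheduler with `qsub`. The following is a minimal sketch of a TORQUE job script for an MPI run; the job name, queue defaults, core counts, and program name are illustrative assumptions, not values taken from this cluster's configuration:

```shell
#!/bin/bash
# Hypothetical Torque/PBS job script (names and resource requests are
# assumptions for illustration only).
#PBS -N mpi_example        # job name shown in the queue
#PBS -l nodes=2:ppn=12     # request 2 compute nodes, 12 cores per node
#PBS -l walltime=01:00:00  # one-hour wall-clock limit
#PBS -j oe                 # merge stdout and stderr into one file

# TORQUE starts the job in the home directory; change to the
# directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Launch one MPI rank per requested core; mpirun comes from whichever
# MPI installation is on the user's PATH.
mpirun -np 24 ./my_mpi_program
```

A script like this would be submitted with `qsub job.sh`, and its progress checked with `qstat`.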

Stateful and Stateless Support

ClusterSF, together with one of the two Head Server Nodes and one of the two Storage Server Nodes, controls thirty-eight (38) of the Compute Server Nodes as a stateful operating sub-cluster. ClusterSL, with the other Head Server Node and Storage Server Node, controls nine (9) of the Compute Server Nodes as a stateless operating sub-cluster. The cluster management software used on the stateless side of the cluster is Scyld ClusterWare, rather than the Moab Cluster Suite.
