Advanced Research Computing Support

Software Supported by the ARCS Group

Cluster Application Software

Various application software packages used by USNA faculty and students have been, are being, or will be installed on the new cluster. The applications include:

  • Abinit
  • Abaqus 6
  • COMSOL 4.3a Server / Batch
  • Gaussian g09 / LINDA
  • Gurobi 5
  • Mathematica / Grid Mathematica
  • MATLAB / MATLAB DC Server 2011b
  • Octave
  • ROMS 564
  • R Statistics
  • TetrUSS 6.0p
  • WRF 3

These packages support parallel computing; in general, their batch or server features are being installed on the cluster to provide that support, normally through some form of MPI. Some applications, such as Abaqus 6, are installed for particular users and are not available for general use.

In some cases the Graphical User Interface (GUI) of an application has to be installed in order to install its batch or server features. However, current plans are to support running application GUIs on the cluster only as needed and on a limited basis, since the graphics capacity of the Head Nodes is currently very limited. An effort is being made to add graphics hardware to the Head Nodes.

The recommended approach is to run an application's GUI on the user's local workstation and submit parallel computing jobs to the application's batch or server features on the cluster via the Intranet. Alternatively, a parallel job can be prepared under the application's GUI on the local workstation, and the job and data files can then be transferred to the cluster and submitted to the application's batch or server features there, as sketched below.
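For the second workflow, a transfer and submission from a local Linux or Mac workstation might look like the following sketch; the user name, host name, directory names, and script name are all illustrative, not the cluster's actual values:

    # Copy the prepared job and data files to a Head Node (host name is illustrative)
    scp -r my_parallel_job/ username@cronus.usna.edu:jobs/

    # Log in to the Head Node and submit the job to the batch system
    ssh username@cronus.usna.edu
    cd jobs/my_parallel_job
    qsub my_job_script.pbs

The submitted job is then handled by the cluster's scheduler, as described under Cluster Job Management below.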

In general, each application implements its batch or server features for parallel computing in its own way. So be prepared to adjust to the requirements of the application and work with the support staff as needed to submit jobs to the cluster.

The individual license agreements of each software package will have to be accepted.

In some cases the server features of an application are still being integrated into the scheduling and job management software of the cluster; until that is complete, those features work outside of the scheduler and can be used on a limited basis.

Contact the cluster support staff for compilation and installation of any other parallel application software packages that you require on the cluster.

Cluster Operating Systems

The primary USNA cluster is implemented with the Red Hat Enterprise Linux (RHEL) 6.5 operating system (OS), which was the latest stable version of RHEL available when the cluster was configured. The OS licenses include support from Red Hat for the expected life of the cluster, five years.

RHEL, a leading commercially supported Linux OS, was selected for security reasons and for compatibility with a cluster management package chosen to provide stateless operations for physics research. RHEL is also generally compatible with the compilers, utilities, and applications expected to be used on the cluster.

Currently, OS utilities are used to manage a disk-based (stateful) implementation of the OS on most of the cluster for general use, and a memory-based (stateless) implementation of the OS on a subset of the cluster for physics research. The Cronus Head Node is used to manage the stateful side of the cluster, and the Rhea Head Node is used to manage the stateless side.

X-Window support is only available on the Head Nodes. If necessary, the GNOME or KDE GUI can be accessed by users who connect to the Head Nodes with X-Window server software. Otherwise, SSH is the primary means of connecting to the Head Nodes and the Compute Nodes. VNC and browser-based access to the cluster can be provided on an as-needed basis.
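For example, from a workstation running an X server, X11 forwarding can be requested over SSH so that graphical programs started on a Head Node display locally. The user name and host name below are illustrative:

    # Ordinary SSH connection to a Head Node (host name is illustrative)
    ssh username@cronus.usna.edu

    # SSH with X11 forwarding, so GUI windows opened on the Head Node display locally
    ssh -X username@cronus.usna.edu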

The cluster is set up with the RHEL, EPEL and RPMFUSION repositories, which can be used as needed to install and update various software packages. However, plans are to only apply required updates and install required software packages, to avoid introducing incompatibility problems with installed user applications and utilities.

Documentation on various RHEL features is available via the Linux ‘man’ command. Additional documentation is also available at RHEL Docs.

Cluster Compilers

The new cluster currently has three different compiler packages installed: the C/C++ and Fortran compilers from Intel, GNU/GCC, and Open64. Commonly used related libraries and make utilities are also installed or can be installed as needed. The Intel compilers are the Cluster Toolkit Compiler Edition for Linux; their licenses include support for at least 3 years. The GNU/GCC and Open64 compilers are open source. AMD supports the Open64 compilers for its multi-core processors.

Java and various related libraries and utilities are also installed. Various other GCC compilers and interpreters are available and can be added as needed.

Plans are to also make the Portland Group CDK C/C++ and Fortran compilers available on the cluster under a license held for the compilers on a legacy cluster. The Intel, Portland and GNU/GCC compilers support MPI or MPI-2 via MPICH, MPICH2, OpenMPI, MVAPICH or MVAPICH2.
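As a brief sketch, and assuming the corresponding environment modules have been loaded (the module names shown are illustrative; see the next section), an MPI program can be built with the wrapper compilers that these MPI stacks provide:

    # Load a compiler and an MPI stack (module names are illustrative; check 'module avail')
    module load intel
    module load openmpi

    # Compile C and Fortran MPI programs with the MPI wrapper compilers
    mpicc  -O2 -o hello_c hello.c
    mpif90 -O2 -o hello_f hello.f90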

Documentation on compilers and interpreters is available via the Linux ‘man’ command.

Cluster Environment Modules

The Environment Modules package is used on the cluster to dynamically modify user environments for using various compilers, libraries, tools and applications. In order to use a component managed by the package, the 'module load' command, along with the module file reference, must be issued at the command prompt or in a script to set up the user environment before the component can be used. To see which modules are available, enter the 'module avail' command at the command prompt. The 'module help' command provides information on other module command options.
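A typical interactive session might look like the following; the module name shown is illustrative, and 'module avail' lists what is actually installed:

    module avail            # list the module files available on the cluster
    module load openmpi     # set up the environment for a component (name is illustrative)
    module list             # show the modules currently loaded
    module unload openmpi   # undo the module's changes to the environment
    module help             # summarize other module command options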

Module files are generally used with the default shell of the cluster, which is the Bash shell. However, module files support all popular shells.

As a user, you will have to get used to using module files to use certain software, compilers in particular. The cluster support staff is available to support the use of environment modules as needed.

More information on the Environment Modules package is available at http://modules.sourceforge.net/

Cluster Utilities and Libraries

A number of special utilities and libraries that support parallel computing applications and operations have been installed on the cluster. As noted in a previous article, MPI and MPI-2 utilities and libraries are installed in the form of OpenMPI, MPICH, MPICH2, MVAPICH and MVAPICH2. Intel MPI support is also included.

Other special utilities and libraries include: AMD LibM (Math Library), AMD Core Math Library (ACML), AMD String Library (LIBSST), Class Library for High Energy Physics (CLHEP), FFTW, HDF5, Intel Math Kernel Library (MKL), NetCDF, NCL-NCARG, NCO and VisIT.
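As a hedged illustration of using one of these libraries, a C program that calls FFTW might be compiled and linked as follows, assuming a module (or equivalent include and library paths) provides the headers and libraries; the module name is illustrative:

    # Load the library's module if one is provided (name is illustrative)
    module load fftw

    # Compile and link against FFTW; the module may supply the needed paths
    gcc -O2 -o transform transform.c -lfftw3 -lm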

Besides these special utilities and libraries, as noted in another article, many standard GNU/GCC utilities and libraries have been preinstalled for ready access. Many more are available through the software repositories that have been implemented on the cluster. Also, many other utilities and libraries that may be required, but are not in the repositories, are compatible with RHEL and the compilers available on the cluster. Contact the cluster support staff for assistance with utilities and libraries as needed.

Cluster Job Management

The cluster includes the MOAB Cluster Suite from Adaptive Computing, Inc. for job scheduling and cluster workload management. The package integrates scheduling, managing, monitoring, and reporting of cluster workloads. There are both browser-based and command-line options for interacting with the package. The browser-based options on the cluster are still being configured; in the meantime, the command-line options should be used. They are provided by the PBS-based TORQUE resource manager, which includes the familiar 'qsub' job submission command and the related 'q'- and 'pbs'-prefixed commands.

When submitting batch jobs on the cluster, be sure to submit them using the 'qsub' command, either directly at the command prompt or with a batch submission script. The cluster support staff is available to help prepare batch submission scripts, and in some cases generic submission scripts have already been prepared for a particular application.
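A minimal submission script might look like the following sketch; the job name, resource requests, module names, and program name are all illustrative and should be adjusted to the actual cluster configuration and application:

    #!/bin/bash
    #PBS -N example_job            # job name
    #PBS -l nodes=2:ppn=8          # request 2 nodes with 8 processors each
    #PBS -l walltime=01:00:00      # one-hour wall-clock limit
    #PBS -j oe                     # merge standard output and standard error

    cd $PBS_O_WORKDIR              # run from the directory the job was submitted from

    # Set up the environment (module names are illustrative)
    module load intel
    module load openmpi

    # Launch the MPI program on the allocated processors
    mpirun -np 16 ./my_parallel_program

Saved as, say, example_job.pbs, the script would be submitted with 'qsub example_job.pbs', and the job's status could then be checked with 'qstat'.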

Information on how to use the ‘qsub’ command is available by entering ‘man qsub’ at the command prompt. Of course, the ‘man’ command can be used to get information on the other related commands. Information is also available from sources on the Internet and from the cluster support staff.
