Center for High Performance Computing Education and Research
The Center for High Performance Computing (HPC) is the focal point for HPC research and education across an array of departments at the Naval Academy. It exposes midshipmen to the physical HPC resources available to them as future leaders in the US Navy and Marine Corps, and educates them in the theoretical underpinnings of HPC research across a variety of scientific disciplines. Through its faculty members, the Center also enables a range of educational opportunities, including research projects, internship experiences, and initiatives that strengthen the HPC education program at the US Naval Academy. Finally, the Center for HPC is a group of like-minded faculty conducting state-of-the-art research in HPC and its applications to science and engineering.
The Center is funded from a variety of sources, but special acknowledgement is due to the continuing support from the DoD HPC Modernization Program.
The mission of the Center for HPC is threefold:
1. to encourage awareness and use of HPC technologies throughout the STEM curriculum and beyond at the US Naval Academy;
2. to foster the use of HPC technology in faculty research; and thereby
3. to provide our midshipmen with the tools, techniques, and talents to become leaders in the DoD science and research communities as they address the Grand Challenges of computing.
Carl Albing, CompSci (co-director)
Nate Chambers, CompSci (co-director)
Gavin Taylor, CompSci (co-director)
CDR Stu Blair, MechEng
Frederick Crabbe, CompSci
Adina Crainiceanu, CompSci
CDR Scott Drayton, Aero
Daniel Finkenstadt, Physics
Evelyn Lunasin, Math
Reza Malek-Madani, Math
Susan Margulies, Math
Luke McDowell, CompSci
Kevin McIlhany, Physics
Chris Pettit, Aero
Daniel S. Roche, CompSci
David Seal, Math
Will Traves, Math
Ryan Wilson, Physics
Richard Witt, Physics
Best Undergraduate Presentation Award: MIDN 1/C Mark Schnabel. Lattice Boltzmann Modeling of Turbine Tip Gap Leakage. American Nuclear Society Student Conference, 2016.
Tom Goldstein, Gavin Taylor, Kawika Barabin, and Kent Sayre. Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction. In Proceedings of AISTATS. 2016.
Ryan Burmeister*, Gavin Taylor, and Tom Goldstein. Neural Net Weight Initialization via Kernel Approximation. NIPS Workshop on Making Sense of Big Neural Data, 2015.
Luke K. McDowell. Relational Active Learning for Link-Based Classification. (Best Paper Award) IEEE/ACM International Conference on Data Science and Advanced Analytics (DSAA2015), October 2015.
Tom Goldstein, Gavin Taylor, Kawika Barabin*, and Kent Sayre*. Distributed Machine Learning via Transpose Reduction. NIPS Workshop on Making Sense of Big Neural Data, 2015.
Luke K. McDowell, Aaron Fleming*, and Zane Markel*. Evaluating and Extending Latent Methods for Link-Based Classification. Advances in Intelligent Systems and Computing (AISC) 346:227-256, March 2015.
Nathanael Chambers, Victor Bowen*, Ethan Genco*, Xisen Tian*, Eric Young*, Ganesh Harihara*, and Eugene Yang*. Identifying Political Sentiment between Nation States with Social Media. In Proceedings of EMNLP 2015. Lisbon, Portugal. 2015.
Mohamed Khochtali*, Daniel S. Roche, and Xisen Tian*. Parallel sparse interpolation using small primes. In Proceedings of the 2015 International Workshop on Parallel Symbolic Computation. ACM, 2015.
Stu Blair, Carl Albing, Alexander Grund, and Andreas Jocksch. Accelerating an MPI Lattice Boltzmann code using OpenACC. In Proceedings of the Second Workshop on Accelerator Programming using Directives (WACCPD '15). ACM, New York, NY. 2015.
Carl Albing. Characterizing Node Orderings for Improved Performance. In Proceedings of the International Workshop in Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems at SC'15. Austin, TX. 2015.