Trident Scholar Abstracts 2018
Dakota J. Allen
Midshipman First Class
United States Navy
Evaluation of Non-Oxide Fuel for Fission-based Nuclear Reactors on Spacecraft
The goal of this project was to study the performance of atypical uranium-based fuels in a nuclear reactor capable of producing 1 megawatt of thermal power with a 15-year core life for space-based applications. Specifically, the project investigated the use of uranium-molybdenum (UMo), uranium nitride (UN), or uranium carbide (UC) and compared their performance to uranium oxide (UO2), which is the fuel form used in the vast majority of commercial nuclear reactor applications. These alternative fuels have improved thermal conductivity and higher uranium loading density as compared to UO2. Improved thermal conductivity of the fuel allows a design with lower peak core temperatures, thereby reducing the chance of violating thermal limits of core materials. Fuel with higher uranium loading density can have more of the fissile uranium-235 isotope present per unit of volume. This allows a design with a smaller and potentially lighter core, which is a significant advantage.
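As a rough illustration of the loading-density argument, the sketch below compares grams of uranium per cubic centimeter for the candidate fuels using nominal theoretical densities and molar masses from the open literature; these numbers are assumptions for illustration only, not values from this study.

```python
# Rough comparison of uranium loading density (g U per cm^3) for candidate fuels.
# Densities and molar masses are nominal literature values, used only for illustration.
fuels = {
    #          (density g/cm^3, molar mass g/mol)
    "UO2":    (10.97, 270.03),
    "UN":     (14.30, 252.03),
    "UC":     (13.63, 250.04),
    "U-10Mo": (17.0,  None),   # metallic alloy, ~90 wt% uranium by definition
}

M_U = 238.03  # g/mol, natural uranium

for name, (rho, M) in fuels.items():
    if M is None:
        u_density = 0.90 * rho        # alloy: uranium weight fraction times density
    else:
        u_density = rho * M_U / M     # ceramic: one U atom per formula unit
    print(f"{name:7s}: ~{u_density:.1f} g U / cm^3")
```

Under these nominal values, UN, UC, and UMo all load noticeably more uranium per unit volume than UO2, which is the basis of the smaller-core argument above.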
The results of this study indicate that use of either UC or UN may result in significant weight savings due to higher uranium loading density. A neutron transport analysis of the core shows that UN fuel was able to maintain a negative temperature coefficient of reactivity throughout core life, with attendant safety benefits. UC, however, appears not to maintain a negative temperature coefficient of reactivity and thus may require additional reactor control mechanisms to maintain stable power output and safe operation. UMo is an effective fuel for low-burnup, low-power applications. This study indicates that, due to restrictive operational temperature limits and adverse response to fission product gas buildup, UMo is inappropriate for designs requiring operation at high temperature, high power, and long duration, such as the design under consideration.
FACULTY ADVISORS
CDR Stuart R. Blair
Mechanical Engineering Department
Professor Martin E. Nelson (ret.)
Mechanical Engineering Department
Assistant Professor Marshall G. Millett
Mechanical Engineering Department
John J. Brough
Midshipman First Class
United States Navy
Assessment of Genetic Screening in the Military
The goal of this project was to undertake a cost-benefit analysis of genetic testing in military populations. We weighed the costs of genetic testing against the likelihood of saving lives of military recruits with undetected, potentially life-threatening genetic conditions. Large genomic databases of asymptomatic populations were used to analyze the effect that genetic screening for hypertrophic cardiomyopathy (HCM, the most common cause of sudden cardiac death) would have on the military. A database containing known pathogenic variants was used as a training set to build logistic regression models that predicted the pathogenicity of genomic variants in two genes known to cause HCM. Our cost-benefit analysis was based, in part, on the frequency of the identified pathogenic variants, as well as their likelihood of causing disease. We compared the costs and benefits of genetic screening to non-genetic physiological tests or no tests at all. We also distributed a survey to the United States Naval Academy to assess the attitudes regarding genetic screening in the military. We conclude that genetic screening with a follow-up echocardiogram for the detection of HCM is a viable and cost-effective option if a microarray genetic test is used. We find that individuals in the military view genetic testing as a viable medical test, but are concerned about the use of genetic screening to make employment decisions.
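A minimal sketch of the kind of variant-classification step described above is given below; the feature names, data, and use of scikit-learn are illustrative assumptions, not the actual pipeline or data used in the study.

```python
# Sketch: train a logistic regression classifier to predict whether a genomic
# variant is pathogenic, using labeled variants as a training set.
# Features and labels here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-variant features, e.g. conservation score, allele frequency,
# and a predicted protein-impact score.
X = rng.random((n, 3))
# Hypothetical labels: 1 = known pathogenic, 0 = benign.
y = (X[:, 0] + 0.5 * X[:, 2] + 0.2 * rng.standard_normal(n) > 0.9).astype(int)

model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
new_variants = rng.random((3, 3))
print("P(pathogenic):", model.predict_proba(new_variants)[:, 1])
```

The predicted pathogenicity probabilities for variants of uncertain significance are the kind of quantity that can then feed a frequency-weighted cost-benefit calculation.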
FACULTY ADVISORS
Associate Professor Daniel P. Morse
Chemistry Department
Assistant Professor Elizabeth J. McGuffey
Mathematics Department
External Collaborators
Dr. Paul S. Kruszka, M.D. Ph.D., CAPT
National Human Genome Research Institute
Ψ Carter B. Burn
Midshipman First Class
United States Navy
Human Aided Reinforcement Learning in Complex Environments
Reinforcement learning algorithms enable computer programs (agents) to learn to solve tasks through a trial-and-error process. As an agent takes actions in an environment, it receives positive and negative signals that shape its future behavior. To speed up learning and make it more accurate, a human expert can be added to the system to guide an agent in solving the task. This project seeks to expand on current systems that combine a human expert with a reinforcement learning agent. Current systems use human input to modify the signal the agent receives from the environment, which works particularly well for reactive tasks. In more complex tasks, these systems do not work as intended: manipulating the environment's signal structure leads to undesired and unexpected agent behavior following human training. Our systems attempt to incorporate humans in ways that do not modify the environment, but rather modify the decisions the agent makes at critical times in training. One of our solutions (Time Warp) allows the human expert to revert back several seconds in the training of the agent to provide an alternate sequence of actions for the agent to take. Another solution (Curriculum Development) allows the human expert to set up critical training points for the agent to learn. The agent then learns how to solve these necessary subskills prior to training in the entire world. Our systems seek to address the planning requirement by employing a human expert during critical times of learning, as the expert sees fit. Our approaches to the planning requirement will allow the human expert-agent model to be expanded to more complex environments than previous human-in-the-loop systems. We hypothesize that our approach will increase the rate at which a reinforcement learning agent learns a solution to a specific task and increase the quality of solutions to problems that require planning into the future, while successfully employing a human teacher to guide the agents.
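A minimal conceptual sketch of the distinction drawn above (intervening on the agent's action choices at critical states rather than reshaping the environment's reward) follows; the environment interface, the advice function, and the hyperparameters are hypothetical and do not represent the project's actual systems.

```python
# Sketch: tabular Q-learning in which a human expert may override the agent's
# action at critical states, instead of modifying the environment's reward.
import random

def q_learning_with_human(env, human_advice, episodes=200,
                          alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            # Human intervention: if the expert flags this state as critical,
            # take the suggested action; otherwise act epsilon-greedily.
            advised = human_advice(state)
            if advised is not None:
                action = advised
            elif random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            best_next = max(Q.get((next_state, a), 0.0)
                            for a in env.actions(next_state)) if not done else 0.0
            td = reward + gamma * best_next - Q.get((state, action), 0.0)
            Q[(state, action)] = Q.get((state, action), 0.0) + alpha * td
            state = next_state
    return Q
```

The key point is that the environment's reward signal is left untouched; only the action selection is influenced, which is the property the project's Time Warp and Curriculum Development mechanisms aim to preserve.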
FACULTY ADVISOR
Professor Frederick Crabbe
Computer Science Department
External Collaborator
Associate Professor Rebecca Hwa
University of Pittsburgh
Christopher G. Cantillo
Midshipman First Class
United States Navy
In an effort to better understand at-sea helicopter operations and improve flight envelope definition, this project analyzed the effects of transient wind gusts on a representative ship airwake. Existing meteorological data were combined with a Simple Frigate Shape 2 model undergoing translational oscillations in order to simulate full-scale, sweeping gusts in a closed-circuit wind tunnel. The experiment modeled a spectrum of full-scale gust durations and wind direction oscillations ranging from 4 to 14 degrees. When scaled to testing conditions for a 1:120 scale model, the test matrix included six separate data points at gust-frequency-referenced Strouhal numbers ranging from 0.430 to 1.474. A 725-Hertz time-resolved Particle Image Velocimetry system was then used to acquire flow field data across a series of three horizontal planes spanning from 0.25 to 1.5 times the ship hangar height. All testing was conducted at a steady-state, hangar-height-referenced Reynolds number of 26,000, which corresponded to a tunnel velocity of approximately 25 feet per second. Dynamic results were compared with static results, and the differences in turbulent structures and wake velocities were quantified. For the transient gust cases, the most notable difference was an extended region of separated flow attached to the windward hangar edge. Upon further analysis, this separation bubble was determined to be the result of a transient gust-induced dynamic stall causing a measurable change in turbulence over the flight deck.
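For readers unfamiliar with the non-dimensional scaling, the back-of-the-envelope sketch below shows how the stated Reynolds number and tunnel speed imply a model length scale, and how a Strouhal number maps to a gust frequency. The air viscosity value and the choice of hangar height as the Strouhal reference length are assumptions for illustration; the study's actual reference length may differ.

```python
# Back-of-the-envelope check of the stated test conditions.
nu = 1.6e-4          # ft^2/s, assumed room-temperature kinematic viscosity of air
U = 25.0             # ft/s, tunnel velocity quoted in the abstract
Re = 26_000          # hangar-height-referenced Reynolds number quoted in the abstract

h_model = Re * nu / U            # implied model hangar height, ft
h_full = h_model * 120           # corresponding full-scale height at 1:120, ft
print(f"model hangar height ~{h_model*12:.1f} in, full scale ~{h_full:.0f} ft")

# Gust frequency implied by a Strouhal number St = f * L / U, taking the hangar
# height as the reference length L purely as an illustration.
for St in (0.430, 1.474):
    f = St * U / h_model
    print(f"St = {St:.3f} -> gust frequency ~{f:.0f} Hz at model scale")
```

Under these assumptions the implied frequencies sit comfortably below the 725 Hz sampling rate of the time-resolved PIV system.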
FACULTY ADVISOR
Associate Professor David S. Miklosovic
Midshipman First Class
United States Navy
The Maritime Continent (MC) is a region particularly susceptible to enhanced eastward-moving convective (thunderstorm) activity during the active phase of the Madden-Julian Oscillation (MJO). To aid MJO predictability over the MC, this study explored atmospheric conditions at two vertical levels of the atmosphere: (a) humidity and height in the troposphere, and (b) wind in the stratosphere. In both, the Wheeler-Hendon Real-Time Multivariate MJO (RMM) Index was used to categorize MJO events over the MC from 1980 to 2017 based on their strength entering and exiting the region. An empirical orthogonal function analysis was developed to identify phases of the stratospheric Quasi-Biennial Oscillation (QBO) by the direction and altitude of zonal wind centers. In the troposphere, positive specific humidity anomalies within the MJO active envelope and a near-surface “moisture foot” region in the lower troposphere east of the active envelope favor MJO propagation. In the stratosphere, east-west winds during active events can indicate the likelihood of intense, eastward-moving MJO convection over the MC. These MJO events over the MC are likely to remain strong when easterly (westerly) mid-stratospheric QBO wind anomalies develop during boreal winter (spring and summer). When easterly mid-stratospheric wind anomalies are present, stratospheric temperature anomalies in thermal wind balance with the zonal wind anomalies decrease upper-tropospheric and lower-stratospheric stability, aiding deep convection and favoring a stronger MJO. Mechanisms that explain the seasonality of this relationship, as well as mechanisms that account for the boreal spring and summer QBO-MJO relationship, are suggested areas for future research.
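A minimal sketch of the kind of empirical orthogonal function (EOF) analysis described above is shown below, using principal components of monthly stratospheric zonal-mean zonal wind profiles. The data here are synthetic placeholders; the actual pressure levels and reanalysis dataset used in the study may differ.

```python
# Sketch: EOF analysis of equatorial stratospheric zonal winds to index QBO phase.
import numpy as np

rng = np.random.default_rng(1)
n_months, n_levels = 456, 15            # e.g., 1980-2017 monthly means, 15 levels
t = np.arange(n_months)
levels = np.linspace(0.0, 1.0, n_levels)
# Synthetic QBO-like signal: downward-propagating ~28-month oscillation plus noise.
u = 20 * np.sin(2 * np.pi * (t[:, None] / 28.0 - levels[None, :])) \
    + 2 * rng.standard_normal((n_months, n_levels))

anom = u - u.mean(axis=0)               # remove the time mean at each level
cov = anom.T @ anom / (n_months - 1)    # level-by-level covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # EOFs are eigenvectors of the covariance
order = np.argsort(eigvals)[::-1]
eofs = eigvecs[:, order[:2]]            # leading two EOFs (vertical structures)
pcs = anom @ eofs                       # principal component time series

# QBO phase can then be defined from the angle in (PC1, PC2) space.
phase = np.arctan2(pcs[:, 1], pcs[:, 0])
variance_explained = eigvals[order[:2]] / eigvals.sum()
print("variance explained by EOF1, EOF2:", np.round(variance_explained, 2))
```

Categorizing months by this phase angle is one common way to relate easterly versus westerly mid-stratospheric QBO anomalies to MJO events.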
FACULTY ADVISORS
Professor Bradford S. Barrett
Oceanography Department
CAPT Elizabeth R. Sanabia, USN
Oceanography Department
External Collaborator
Assistant Professor Pallav Ray
Florida Institute of Technology
Benjamin R. Dunphy
Midshipman First Class
United States Navy
Magnetotransport Properties of Shallow Quantum Well Structures for Spintronic Applications
The electron is a fundamental particle that carries both an electric charge and an intrinsic angular momentum (spin). Recently, interest in employing the electron’s spin triggered a wave of research in spin-dependent electron transport in various materials (spintronics). Spintronic devices manipulate, generate, and detect spin currents rather than the charge currents found in electronic devices. This requires the materials employed to have large spin-orbit coupling and excellent semiconducting properties.
This project focused on n-doped InAlSb/InAs/AlGaSb heterostructures, which show promise in spintronic applications. These materials have an InAs quantum well near the surface for easy injection and detection of current and have been optimized to have high electron mobility at room temperature. This project investigated the effect of the well's proximity to the surface on the semiconducting and spin-orbit properties of these materials to establish their potential for spintronic applications. To this end, we measured and evaluated the competing electron transport mechanisms present in these shallow wells that are responsible for the observed behaviors.
The sample heterostructures were fabricated into Hall bars, and their sheet and Hall resistances were measured under variable magnetic field, temperature, and illumination conditions, with wavelengths from 400 nm to 1300 nm. We then analyzed the Shubnikov-de Haas oscillations and the Hall effect to extract the semiconducting and spin-orbit properties of these structures. The results are compared and contrasted with the properties of a deep-channel InAs sample with excellent spin-orbit coupling properties. We find that the carrier concentration of the shallow channels increases under infrared illumination, but decreases as the wavelength is decreased further. We also find a general increase in the effective mass of the samples with the carrier concentration. Lastly, we find that the shallow-well samples do not exhibit observable spin-orbit coupling despite excellent semiconducting properties. We conclude that small quantum scattering times make resolving the spin-split populations difficult.
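For context, a minimal sketch of how sheet carrier density and mobility follow from the Hall and sheet resistance measurements described above is given below; the numerical values are illustrative placeholders, not results from this work.

```python
# Sketch: extracting sheet carrier density and mobility from Hall-bar data.
e = 1.602e-19          # C, elementary charge

B = 1.0                # T, applied perpendicular magnetic field (illustrative)
R_xy = 200.0           # ohm, measured Hall resistance at field B (illustrative)
R_sheet = 150.0        # ohm/square, measured sheet resistance (illustrative)

n_s = B / (e * R_xy)               # sheet carrier density, m^-2
mu = 1.0 / (e * n_s * R_sheet)     # electron mobility, m^2/(V*s)

print(f"n_s ~ {n_s:.2e} m^-2  ({n_s * 1e-4:.2e} cm^-2)")
print(f"mu  ~ {mu:.2f} m^2/Vs  ({mu * 1e4:.0f} cm^2/Vs)")
```

The Shubnikov-de Haas analysis provides an independent density estimate; comparing the two is one way to detect the parallel conduction channels and spin-split populations discussed above.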
FACULTY ADVISOR
Associate Professor Elena Cimpoiasu
Physics Department
External Collaborator
Dr. Brian Bennett
Office of Naval Research
Midshipman First Class
United States Navy
This paper presents an autonomous multivehicle control algorithm capable of persistently searching and tracking targets in a defined search area subject to the operational endurance constraints of individual agents. A small-scale system serves as proof of concept for larger systems that are employed in operational environments. The underlying goal is to design a modular control architecture that can be adapted to any type of autonomous vehicle, search area, or target. In practical application, a target can be anything from a heat signature to radioactive material; therefore, this project simulates a generic emitter-detector pair as a placeholder for real-world applications. The control strategy accounts for the appearance, motion, and disappearance of multiple targets in the search space, which motivates the use of a team of multiple search agents. When an agent's battery level drops below a predetermined threshold, the agent returns to a base station to recharge and is relaunched into the mission. Remaining agents must account for this loss and gain of team members as vehicles exit and re-enter the search environment.
The contributions of this work are 1) the design of search trajectories for autonomous vehicles with limited endurance, 2) the incorporation of return-to-base and recharge time requirements, and 3) the coordination of multiple vehicles through a decision-making model that assigns agents to operational modes. Each of these components enables persistent multivehicle operations. Simulation results are intended for implementation on a system of quadrotors complemented by a system capable of autonomously recharging vehicles, sustaining a multivehicle team beyond the mission life of a single vehicle.
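A minimal sketch of the endurance-aware mode-assignment logic described above follows; the thresholds, mode names, and agent interface are hypothetical and intended only to illustrate the decision structure.

```python
# Sketch: assign each agent to SEARCH, TRACK, or RETURN based on battery level
# and whether a target is currently assigned to it.
from dataclasses import dataclass

RETURN_THRESHOLD = 0.25   # battery fraction at which an agent must go recharge
RESUME_THRESHOLD = 0.95   # battery fraction at which a recharged agent rejoins

@dataclass
class Agent:
    name: str
    battery: float          # 0.0 - 1.0
    mode: str = "SEARCH"
    target: object = None

def update_modes(agents):
    for a in agents:
        if a.mode == "RETURN":
            if a.battery >= RESUME_THRESHOLD:
                a.mode = "SEARCH"          # relaunch into the mission
        elif a.battery <= RETURN_THRESHOLD:
            a.mode = "RETURN"              # head to the base station to recharge
            a.target = None                # remaining agents must cover this target
        elif a.target is not None:
            a.mode = "TRACK"
        else:
            a.mode = "SEARCH"
    return agents
```

In a full implementation this assignment step would run each control cycle, with the search-trajectory planner reallocating coverage whenever an agent enters or leaves the RETURN mode.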
FACULTY ADVISORS
Assistant Professor Levi D. DeVries
Weapons and Systems Engineering Department
Assistant Professor Michael D. Kutzer
Weapons and Systems Engineering Department
Gregory E. Hyer
Midshipman First Class
United States Navy
A Microlensing Analysis of the Central Engine in the Lensed Quasar WFI J2033-4723
We measured the size of the accretion disk in the gravitationally lensed quasar WFI J2033-4723 by the analysis of 13 seasons of optical imagery. Using point spread function (PSF) modeling software, we measured the brightness of each of this system’s four images in 7 seasons of optical monitoring data taken at the 1.3m SMARTS telescope at Cerro Tololo, Chile and in 6 seasons of optical monitoring data from the 1.5m EULER telescope in La Silla, Chile. We combined these new data with published measurements from Vuissoz et al. (2008) to create a 13-season set of optical light curves. Employing the Bayesian Monte Carlo microlensing analysis technique of Kochanek (2004), we analyzed these light curves to yield the first-ever measurement of the size of this quasar’s accretion disk, log{(r_s/cm)[cos(i)/0.5]^(1/2)} = 15.86 (+0.25, −0.27), at the rest-frame center of the R band, λ_rest = 247 nm. Despite the fact that we now know of ~106 lensed quasars, the size of the central engine has been measured in only 14 of these systems.
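To put the quoted disk size in more familiar units, the short conversion below evaluates 10^15.86 cm (and the quoted uncertainty range) in AU and light-days, ignoring the inclination factor [cos(i)/0.5]^(1/2); this is simple unit arithmetic on the stated value, not an additional result.

```python
# Convert log10(r_s/cm) = 15.86 (+0.25 / -0.27) into AU and light-days.
AU_CM = 1.496e13          # cm per astronomical unit
LIGHT_DAY_CM = 2.59e15    # cm per light-day

for label, logr in [("best", 15.86), ("upper", 15.86 + 0.25), ("lower", 15.86 - 0.27)]:
    r = 10 ** logr
    print(f"{label}: r_s ~ {r:.2e} cm = {r / AU_CM:.0f} AU = {r / LIGHT_DAY_CM:.1f} light-days")
```

The best-fit value corresponds to a disk scale radius of roughly a few light-days, typical of microlensing disk-size measurements in other lensed quasars.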
FACULTY ADVISORS
Associate Professor Christopher W. Morgan
Physics Department
Associate Professor Jeffrey A. Larsen
Physics Department
Carl C. Kolon
Midshipman First Class
United States Navy
Stability of Nonlinear Swarms on Flat and Curved Surfaces
Swarming is a near-universal phenomenon in nature. Many mathematical models of swarms exist, both to model natural processes and to control robotic agents. We study a swarm of agents with spring-like attraction and nonlinear self-propulsion. Swarms of this type have been studied numerically, but to our knowledge, no proofs of stability yet exist. We are motivated by a desire to understand the system from a mathematical point of view. Previous numerical experiments have shown that the system either converges to a rotating circular limit cycle with a fixed center of mass, or the agents clump together and move along a straight line. We show that this is not always the case, and the behavior is sometimes more nuanced. Our specific goal is to investigate stability of the system’s circular rotating state. The system is translation-invariant, and when the center of mass comes to a halt, the agents decouple from each other.
We apply methods from the stability theory of dynamical systems, including Liénard’s Theorem, LaSalle’s Invariance Principle, and Lyapunov’s direct and indirect methods, to globally characterize the behavior of these decoupled systems and to locally characterize the desired behavior of the entire swarm. We confirm our theoretical findings with numerical experiments. So far, swarm models like this have only been studied in Euclidean, or flat, space. We extend a class of swarm models to curved geometries, or Riemannian manifolds, using concepts from differential geometry. Through numerical simulation, we find that their behavior mimics the behavior of the same swarms in flat space. We use this extension in two ways: we modify swarms to fit on embedded surfaces, creating swarms that move on spheres and in hyperbolic space, and we modify the limit cycles of swarms in flat space into desired shapes, such as ovals.
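As a qualitative illustration of the planar case discussed above, the sketch below integrates one common form of a spring-coupled, self-propelled swarm model; the exact equations, coupling, and parameters used in the study may differ.

```python
# Sketch of one common spring-coupled, self-propelled swarm model in the plane:
#   x_i'' = (1 - |x_i'|^2) x_i'  -  (x_i - x_cm),
# integrated with a simple Euler step (illustration only).
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 20, 0.01, 20_000
x = rng.standard_normal((N, 2))       # positions
v = rng.standard_normal((N, 2))       # velocities

for _ in range(steps):
    x_cm = x.mean(axis=0)
    speed2 = np.sum(v * v, axis=1, keepdims=True)
    a = (1.0 - speed2) * v - (x - x_cm)   # self-propulsion plus spring to center of mass
    v += dt * a
    x += dt * v

# If the swarm reaches the rotating state, agents settle onto a circle of radius ~1
# about a now-stationary center of mass, moving at unit speed.
r = np.linalg.norm(x - x.mean(axis=0), axis=1)
print("mean radius:", r.mean().round(2), " mean speed:", np.linalg.norm(v, axis=1).mean().round(2))
```

Runs of this kind exhibit the two outcomes mentioned above: a rotating circular limit cycle with a fixed center of mass, or a clumped, translating state.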
Finally, we use Gazebo, a high-fidelity robotics simulator, to simulate a robotic swarm following the same swarm model. We show that robotic agents display the behavior which we predict mathematically, indicating that it is feasible to control robot swarms using this method.
FACULTY ADVISOR
Assistant Professor Kostya Medynets
Mathematics Department
David J. Liedkta
Midshipman First Class
United States Navy
Prediction of Regional Voting Outcomes Using Heterogeneous Collective Regression
Increasingly, many important domains in the world can be viewed as networks of linked nodes: people connected by social network “friendships,” webpages connected by hyperlinks, and even geo-political areas connected by proximity and common interests. To leverage these links for prediction and analysis tasks, Machine Learning researchers have developed multiple techniques for link-based classification (LBC). While LBC can substantially improve prediction accuracy in some domains, current limitations greatly restrict its applicability when used to evaluate heterogeneous domains (e.g., when the collection of “nodes” under study are actually drawn from multiple populations). Additionally, traditional LBC predicts only categorical outputs, while link-based regression and the prediction of continuous outputs have been left largely unexplored.
One such application that requires continuous outputs involves elections. Predicting the voting outcome of national or regional elections is a challenging yet important problem, and has great implications for regional and international security. As just one example, how well can the national outcome of an election be predicted, given past voting history and some incomplete “day of voting” results? A recent study by Etter et al., using Swiss referendum outcomes, reported high accuracy, even when only 5% of voting “regions” had reported results. This study used a collaborative filtering approach to implicitly leverage the correlation present between “nearby” regions. They did not, however, consider formulating the regions as a network.
This project presents the first extension of LBC algorithms to multiple predictive “models” and continuous outputs (thus yielding heterogeneous collective regression, HCR). To demonstrate the effectiveness of this approach, we apply it to the voting outcome prediction task evaluated by Etter et al.
Adapting LBC to HCR, in a form suitable for the voting prediction task, involved a number of algorithmic decisions and two primary challenges. First, the existing data had to be converted into a suitable network, since “links” are not inherently present. While prior work has proposed some methods for similar tasks, based on node similarity or proximity, we demonstrate that link construction based on correlation history yields superior results. Second, since existing LBC methods only support the prediction of categorical outputs, we had to create new methods for relational feature construction to facilitate prediction that instead produces continuous outputs. We show that simple feature strategies can enable link-based regression to improve accuracy. However, we also propose novel alternatives based on “weighted vector” approaches and continuous extensions of existing Bayesian probabilistic reasoning, and demonstrate that they can yield even better accuracy. Overall, we demonstrate that, for the voting prediction task, HCR can be highly effective, robust to multiple choices of regression parameters and linking strategies, and computationally practical. This success opens the door to the application of HCR to other analysis tasks for link-based data.
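The sketch below illustrates the two steps described above in their simplest form: linking regions by the correlation of their voting histories, then iteratively regressing each unreported region on its own features plus a relational feature built from its neighbors' current estimates. The data, the correlation threshold (top-5 neighbors), and the use of plain linear regression are illustrative assumptions, not the project's actual models.

```python
# Sketch: correlation-based link construction followed by iterative collective regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_regions, n_past = 60, 30
history = rng.random((n_regions, n_past))         # past referendum results per region
features = history.mean(axis=1, keepdims=True)    # a simple per-region feature
truth = 0.7 * features[:, 0] + 0.1 * rng.standard_normal(n_regions)

# 1) Link construction: connect each region to its most correlated peers.
corr = np.corrcoef(history)
links = [np.argsort(corr[i])[::-1][1:6] for i in range(n_regions)]  # top-5 neighbors

# 2) Collective regression: a few regions have reported; estimate the rest iteratively.
known = rng.choice(n_regions, size=max(3, n_regions // 20), replace=False)
estimate = np.full(n_regions, truth[known].mean())
estimate[known] = truth[known]

model = LinearRegression()
for _ in range(10):
    rel = np.array([estimate[links[i]].mean() for i in range(n_regions)])  # relational feature
    X = np.column_stack([features[:, 0], rel])
    model.fit(X[known], truth[known])
    pred = model.predict(X)
    pred[known] = truth[known]        # never overwrite reported results
    estimate = pred

unreported = np.setdiff1d(np.arange(n_regions), known)
print("mean abs error on unreported regions:", np.abs(estimate - truth)[unreported].mean().round(3))
```

The "heterogeneous" aspect of HCR would replace the single regression model with different models for different node populations, but the iterative neighbor-informed structure is the same.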
FACULTY ADVISOR
Professor Luke K. McDowell
Computer Science Department
Fernando R. Vale-Enriquez
Midshipman First Class
United States Navy
Creating a Fast SMT Solver for the Theory of Real Non-Linear Constraints
The satisfiability problem asks whether there is some valid combination of inputs for which a given logical formula evaluates to true. Satisfiability Modulo Theories (SMT) Solvers are specialized software tools which answer the satisfiability problem for a formula in some theory, such as the theory of real numbers. SMT Solvers are at the core of many important tools and fields, such as automated theorem proving, hybrid systems design, formal verification of software, and security design tools. Tools which use SMT Solvers have been shown to be highly effective, but the difficulty of satisfiability solving in certain relevant theories limits the usefulness of SMT Solvers in industry.
SMT Solvers use a model which incorporates two different pieces of software: a satisfiability solver and a theory solver. The satisfiability solver operates on the logical structure of the problem and generates a list of logical assumptions that satisfies that structure. The SMT solver translates these assumptions into facts in the theory, which are then checked for satisfiability by the theory solver. The efficiency of theory solvers is typically the limiting factor in the speed of SMT Solvers; one poorly phrased query can take a theory solver hours to solve.
To improve the efficiency of SMT Solvers, we propose a new model of SMT Solver which incorporates a partial theory solver. The partial theory solver solves many, but not all, problems very quickly; in fact, it is guaranteed to run in polynomial time in all cases. This limits the time for which the slower but fully-fledged theory solver must be applied: the partial theory solver solves in fractions of a second what may take a full theory solver a significant amount of time. We developed our theory solver by augmenting the algorithms discussed in the paper "Black-box/white-box simplification and applications to quantifier elimination" by Christopher W. Brown and Adam W. Strzebonski. We modified these algorithms so that they could be used in the context of SMT solving, which required that they output a reason for every deduction they make. Finally, we implemented these algorithms in a theory solver and developed a full SMT solver which incorporates the partial theory solver.
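The sketch below shows the overall solver loop described above: the SAT solver proposes a Boolean model, the fast partial theory solver is consulted first, and the slower complete theory solver runs only when the partial solver cannot decide. All object and method names are hypothetical placeholders, not the project's actual interfaces.

```python
# Sketch: SMT solving loop with a polynomial-time partial theory solver tried first.

def smt_solve(formula, sat_solver, partial_theory, full_theory):
    while True:
        assignment = sat_solver.next_model(formula)
        if assignment is None:
            return "UNSAT"                                    # no Boolean model remains
        constraints = [lit.to_theory() for lit in assignment]  # literals -> theory facts

        verdict, reason = partial_theory.check(constraints)    # fast, may return UNKNOWN
        if verdict == "UNKNOWN":
            verdict, reason = full_theory.check(constraints)   # complete, possibly slow

        if verdict == "SAT":
            return assignment
        # Theory conflict: block this assignment (via its reason) and try again.
        formula = formula.add_conflict_clause(reason)
```

The requirement that the partial solver report a reason for every deduction is what makes the conflict clause in the last step possible.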
FACULTY ADVISOR
Professor Christopher W. Brown
Computer Science Department
Michael D. Walker
Midshipman First Class
United States Navy
A Partially Premixed Combustion Application for Power Improvement in Military Diesel Engines
Due to increasing weight in military platforms, engine power needs to be increased in order to maintain performance. Diesel engine power is limited by soot formation, which is an indicator of incomplete fuel combustion due to lack of oxygen and poor mixing of the fuel and air. Once the soot limit is reached in a conventional diesel engine, further fuel increases will not result in more engine power since both the time for combustion (i.e. engine RPM) and oxygen are limited. An alternative approach is needed to both deliver and convert fuel energy in a diesel engine’s combustion chamber.
Partially Premixed Combustion (PPC) allows for better mixing of the air and fuel in the combustion chamber, leading to lower combustion temperatures and higher flame speed (shorter burn duration) as compared to conventional diesel combustion. PPC delivers additional fuel to the combustion chamber through the air intake system, in addition to the in-cylinder (i.e., combustion chamber) injection event, allowing for increased power opportunities. As in the four-stroke Diesel cycle (conventional compression ignition, CI), fuel is injected in-cylinder at the top of the compression stroke; with PPC, fuel is also port injected during the intake stroke. With a PPC fueling approach, by the end of the intake stroke, some fuel (a minority fraction, up to one-half of the total injection amount) and air have been mixed in the engine's cylinder. The fuel injection event at the top of the compression stroke then auto-ignites, starting the overall combustion event, which forces the piston down for the power stroke. The premixing improves the combustion stoichiometry, but this small amount of fuel is not enough to cause early ignition, so the premixed fuel ignites only with the main injection event. Thus, the main in-cylinder injection event controls combustion timing. The lower temperature and improved flame region (air-to-fuel ratios) lead to lower soot and NOx (oxides of nitrogen) in the exhaust in comparison to standard diesel combustion.
This project improved the specific power of three distinct engines by retrofitting each with port injection to achieve PPC. The project first fundamentally characterized achievable power gains in a flexible Waukesha Diesel CFR research engine that allows for the manipulation of combustion phasing (timing), compression ratio (CR), and maximum baseline load, and determined the conditions needed to achieve optimal combustion phasing. Fuels evaluated include conventional Navy JP-5 and less reactive, non-JP-5 fuels via port injection (potentially leading to increased premixing with further power gains). In essence, this study sought to explore whether a two-fuel PPC approach might be worth the additional fueling complexity when compared to a conventional diesel approach or a single-fuel PPC approach, based on power improvements from the high-load extension of the exhaust sooting limit. Based on these results, PPC was then applied to a small Navy diesel generator and a Marine Corps special operations Humvee engine in order to quantify practical power gains, using both single- and dual-fuel approaches.
In the Waukesha CFR engine, power increased by −2% to 27% (at CR 21.5) over conventional diesel combustion without a soot opacity penalty. In the Yanmar L100V6 engine-generator, power levels of 9.3 kW to 11.3 kW were achieved, compared to 8.5 kW at conventional operation, without a soot penalty. In the Humvee engine, power improvements of 7% and 8% were shown with JP-5 and iso-octane, respectively. Early heat release behavior was seen with both JP-5 and iso-octane, leading to longer burn durations and less soot-reduction benefit than expected.
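For reference, the percent gains implied by the generator numbers quoted above follow directly from the stated power levels; the snippet below is simple arithmetic on those figures, not additional data.

```python
# Percent power gain of the PPC generator operating points over the 8.5 kW baseline.
baseline_kw = 8.5
for ppc_kw in (9.3, 11.3):
    gain = 100 * (ppc_kw - baseline_kw) / baseline_kw
    print(f"{ppc_kw} kW -> {gain:.0f}% over baseline")
# 9.3 kW is about a 9% gain; 11.3 kW is about a 33% gain.
```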
FACULTY ADVISORS
Professor Jim S. Cowart
Mechanical Engineering Department
CAPT Leonard J. Hamilton (ret.), USN
Mechanical Engineering Department
Michael J. Wallace
Midshipman First Class
United States Navy
Innovations to Increase the Power of State-of-the-Art Graph-Theoretic Two-Sample Statistical Tests
One of the classic problems in statistics is to determine whether a group of observations can be characterized as statistically different from some other group. In the case of the well-known two-sample t-test, observations are univariate (1-dimensional) and underlying probability distributions are normal (or approximately normal). However, in real-world problems, the number of covariates may be very large and there may be little known about underlying distributions. Finding powerful tests for group differences in this general multivariate case presents challenges, and this difficult case has attracted recent research interest.
In the setting of graph-theoretic approaches, the first consequential two-sample test was introduced by Friedman and Rafsky (FR1979) as a multivariate generalization of the Wald-Wolfowitz runs test. The rationale of this test, and of newer, similar tests, is that if two samples are from different distributions, observations will tend to be closer to others from the same sample than to those from the other sample.
This project explores the tradeoffs between graph density, test power, and computational costs in a variety of scenarios and recommends guidelines for edge-counting criteria. The benefits and drawbacks of using denser subgraphs are analyzed to extend recent findings in statistical literature. A power simulation study is used to examine state-of-the-art tests in competition under the same conditions and compare performance. A novel exploratory approach is then introduced that enables finding group differences at lower computational costs.
Next, the efficacy of a newly-proposed dissimilarity measure for mixed data, “treeClust”, is investigated using real-world medium-sized and large-sized data sets.
Finally, we introduce a new test that involves ranking all of the edges with respect to weight instead of selecting a subset of edges based on some other more time-consuming optimality criterion, as is done in other such tests. This Cumulative Cross-Count (CCC) test is a competitively powerful, user-friendly, nonparametric, multivariate, multi-group test. We derive moment information and employ permutation approaches to approximate p-values.
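To make the family of tests being compared concrete, the sketch below implements a Friedman-Rafsky-style test: build a minimum spanning tree on the pooled sample, count the edges joining the two samples, and obtain a p-value by permutation. This illustrates the edge-counting idea discussed above; it is not the CCC test itself, and the data and permutation count are placeholders.

```python
# Sketch: Friedman-Rafsky-style minimum-spanning-tree two-sample test with a
# permutation p-value. Few cross-sample edges is evidence the samples differ.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def cross_count(points, labels):
    d = squareform(pdist(points))
    mst = minimum_spanning_tree(d).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

def fr_test(x, y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    labels = np.array([0] * len(x) + [1] * len(y))
    observed = cross_count(pooled, labels)
    perm_counts = [cross_count(pooled, rng.permutation(labels)) for _ in range(n_perm)]
    # Small cross-counts are extreme, so the p-value comes from the left tail.
    p = (1 + sum(c <= observed for c in perm_counts)) / (n_perm + 1)
    return observed, p

rng = np.random.default_rng(1)
x = rng.standard_normal((50, 5))
y = rng.standard_normal((50, 5)) + 0.8      # shifted mean in every coordinate
print(fr_test(x, y, n_perm=199))
```

Tests of this type differ mainly in which graph is built and how its edges are counted, which is exactly the graph-density and edge-counting tradeoff explored in this project.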
FACULTY ADVISOR
CAPT David M. Ruth, USN
Mathematics Department
