USNA | Abstracts 2005


Danica L. Adams

Midshipman First Class
United States Navy

Direction of Arrival Estimation Using a Reconfigurable Array

The goal of this project was to create a reconfigurable array that can determine the direction of arrival of a target. This goal was accomplished by using existing algorithms in conjunction with a redefined model of the assumed array geometry. These algorithms were modified to work with arrays that can move or change shape. The project investigated the effect of array rotation on the amount of data the algorithm requires. It also examined the effect of changing the geometry from a purely linear array to an array with two linear segments.

For demonstration purposes, ultrasonic sensors were used. Before they were implemented, the proposed geometry modifications were simulated using a computer model. After the simulations were complete, the modifications were tested on the actual array, beginning with the linear configuration. The other geometries investigated consisted of half of the array rotating so that the two halves formed an angle. Each configuration was tested by updating the assumed array geometry within the algorithm, which allowed different geometries to be evaluated without changing the rest of the estimator.
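The abstract does not reproduce the estimator itself. As a rough illustration of how an assumed geometry enters such an algorithm, the sketch below steers a delay-and-sum beamformer over arbitrary sensor positions, including a bent ("two linear parts") array; the spacings, angles, and simulated 60-degree source are illustrative choices, not the project's ultrasonic hardware.

```python
import numpy as np

def steering_vector(positions, theta, wavelength):
    """Phase response of sensors at arbitrary 2-D positions to a plane wave
    arriving from angle theta (radians); the geometry enters only here."""
    k = 2 * np.pi / wavelength
    direction = np.array([np.cos(theta), np.sin(theta)])
    return np.exp(1j * k * (positions @ direction))

def doa_spectrum(snapshots, positions, wavelength, thetas):
    """Delay-and-sum spatial spectrum; its peak estimates the DOA."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    return np.array([np.real(steering_vector(positions, t, wavelength).conj()
                             @ R @ steering_vector(positions, t, wavelength))
                     for t in thetas])

# Bent array: four sensors along x, four more rotated 45 degrees
lam = 1.0
p1 = np.array([[0.5 * i, 0.0] for i in range(4)])
ang = np.pi / 4
p2 = np.array([p1[-1] + 0.5 * i * np.array([np.cos(ang), np.sin(ang)])
               for i in range(1, 5)])
positions = np.vstack([p1, p2])

# Simulated narrowband source at 60 degrees plus sensor noise
rng = np.random.default_rng(0)
theta_true = np.deg2rad(60.0)
sig = np.exp(1j * rng.uniform(0, 2 * np.pi, 200))
snap = (steering_vector(positions, theta_true, lam)[:, None] * sig[None, :]
        + 0.05 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))))

thetas = np.deg2rad(np.arange(0.0, 180.0, 1.0))
est_deg = np.degrees(thetas[np.argmax(doa_spectrum(snap, positions, lam, thetas))])
```

Because the geometry is an explicit input, rotating or bending the array only changes the `positions` table, which is the spirit of the modification the project describes.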

This project complements research on non-conventional arrays funded by the Office of Naval Research. The results of this Trident project investigation led to further research and development of sonar arrays that may have practical applications for the Navy. A comparable research area is the resolution of ambiguities that arise when determining a direction of arrival; the results obtained in this Trident project also fit into that area of research in both the Navy and the civilian world.

FACULTY ADVISORS
Associate Professor Richard T. O'Brien, Jr.
Associate Professor Kiriakos Kiriakidis
Weapons and Systems Engineering Department


 
Andrew C. Bashelor
Midshipman First Class
United States Navy

Counting Conics

Projectiles follow parabolic paths and planets move in elliptical orbits. Circles, hyperbolas, parabolas and ellipses are curves that are so abundant in nature, engineering, and art that we cannot help but notice them. Each of these curves is an example of a conic. In 1848, the mathematician Jacob Steiner posed a famous question: "How many conics are tangent to five fixed conics?" Steiner claimed to have solved the problem and he gave the answer 7776. This solution was accepted as valid for sixteen years. When the problem was revisited in 1864, the mathematician Michel Chasles realized that Steiner had miscounted the true number of conics that satisfied the conditions.

Not all conics are smooth plane curves. Singular conics are curves whose defining polynomials are reducible to the product of two linear factors. These conics can be represented as either a pair of crossed lines or a line of multiplicity two. Steiner failed to account for the degenerate conics that can be represented as a double line. He fell victim to what algebraic geometers call excess intersection.
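For context, the arithmetic behind Steiner's figure, and the correction later made rigorous through excess intersection theory, can be summarized compactly (this is the classical count, not one of the project's twenty-one variants):

```latex
% A plane conic $ax^2 + bxy + cy^2 + dxz + eyz + fz^2 = 0$ is a point of
% $\mathbb{P}^5$.  Tangency to a fixed general conic cuts out a hypersurface
% of degree $6$ in $\mathbb{P}^5$, so Bezout's theorem naively predicts
N_{\text{naive}} = 6^5 = 7776.
% But every double line meets each fixed conic doubly wherever they meet, so
% the Veronese surface of double lines lies on all five tangency hypersurfaces
% and contributes excess intersection.  Removing its contribution
% (Chasles, 1864) leaves the true count of smooth conics tangent to five
% general conics:
N = 3264.
```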

This Trident project centered on understanding how excess intersection affects problems of enumeration involving plane conics. Research focused on finding the solutions to twenty-one variations of Steiner's problem. These problems were solved by examining the blowup of the space of conics along the set of double lines and executing computations in what is known as the Chow ring. These methods provide not only tangible numerical results but also help to illuminate the rich underlying geometry of these fundamental problems.

FACULTY ADVISORS
Assistant Professor Amy E. Ksir
Associate Professor William N. Traves
Mathematics Department


 
Bradford L. Bonney
Midshipman First Class
United States Navy

Non-orthogonal Iris Segmentation

The goal of this Trident Scholar project was to isolate the iris, the colored part of the eye, in a non-orthogonal digital image of the human eye. A non-orthogonal image is one in which the eye is not looking directly at the camera. Iris patterns differ significantly between individuals (including identical twins), which allows the iris to serve as an accurate biometric identifier.

Both commercial and research iris recognition systems are becoming widespread in government and industry for logical security and access control. These iris recognition systems assume that captured iris images are normal, or orthogonal, to the sensing devices, and therefore search for circular patterns in the image. Off-angle, or non-orthogonal, images of irises cannot currently be used for identification because the iris appears elliptical; commercial algorithms cannot isolate an elliptical iris in order to start the identification process. This research expanded the functionality of iris recognition technology by developing a set of new algorithms to isolate a non-orthogonal iris in a digital image.

The algorithmic approach was to first isolate the pupil, the dark portion in the center of the eye. The pupil was isolated using bit-plane processing. The pupil appeared as a large homogeneous region surrounded by insignificant noise, which allowed for easy definition of the pupil-iris boundary. Next, the limbic boundary (the outer edge of the iris) was determined in the cardinal directions, and an ellipse was calculated that incorporated those points. After all boundaries were calculated, an “iris mask” was created to identify the pixels in the image containing iris data, the only pixels of value for identifying an individual.
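A schematic version of that pipeline, isolating a dark pupil by bit-plane processing and fitting an axis-aligned ellipse through limbic-boundary points, can be sketched on a synthetic eye image. The bit planes, gray levels, and boundary samples below are invented for illustration and are not the project's algorithms.

```python
import numpy as np

def pupil_mask(gray, planes=(7, 6)):
    """Bit-plane processing: dark pupil pixels have their high-order bits clear."""
    mask = np.ones_like(gray, dtype=bool)
    for p in planes:
        mask &= ((gray >> p) & 1) == 0       # keep pixels whose bit p is 0
    return mask

def fit_ellipse_axes(boundary_pts, center):
    """Least-squares axis-aligned ellipse (x/a)^2 + (y/b)^2 = 1 through
    limbic-boundary points measured from the pupil center."""
    d = boundary_pts - center
    coef, *_ = np.linalg.lstsq(d ** 2, np.ones(len(d)), rcond=None)
    return 1 / np.sqrt(coef)                 # semi-axes (a, b)

# Synthetic 8-bit eye image; gray levels chosen so bit planes 7 and 6
# separate the pupil from iris and background
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
img = np.full((h, w), 200, dtype=np.uint8)                           # background
img[((xx - 100) / 60.0) ** 2 + ((yy - 100) / 40.0) ** 2 <= 1] = 100  # elliptical iris
img[(xx - 100) ** 2 + (yy - 100) ** 2 <= 15 ** 2] = 20               # dark pupil

mask = pupil_mask(img)
cy, cx = np.argwhere(mask).mean(axis=0)      # pupil center, ~(100, 100)

# Limbic boundary samples in the four cardinal directions (known exactly here;
# in the real algorithm they come from edge detection along each axis)
pts = np.array([[160.0, 100.0], [40.0, 100.0], [100.0, 140.0], [100.0, 60.0]])
a, b = fit_ellipse_axes(pts, np.array([cx, cy]))
```

On this clean synthetic image the recovered semi-axes match the iris ellipse (60 and 40 pixels); real imagery adds eyelids, reflections, and noise that the project's algorithms had to handle.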

The functionality of the algorithm was tested using a database collected at the United States Naval Academy. Both orthogonal and non-orthogonal iris images were used to collect quantitative results.

FACULTY ADVISORS
Professor Delores M. Etter
Assistant Professor Robert W. Ives
Electrical Engineering Department


 
Nathan F. Brasher
Midshipman First Class
United States Navy

Trajectory and Invariant Manifold Computation for Flows in the Chesapeake Bay

The field of mathematics known as dynamical systems theory has seen great progress in recent years. A number of techniques have been developed for computation of dynamical systems structures based on a data set of a given flow, specifically Distinguished Hyperbolic Trajectories (DHTs) and their invariant manifolds.

In this project, algorithms in MATLAB® have been successfully implemented and applied to a number of test problems, as well as to the Chesapeake Bay flow data generated by the QUODDY shallow-water finite-element model. A number of interesting discoveries have been made including instabilities of convergence of the DHT algorithm and evidence of lobe dynamics in the Chesapeake.

Additionally, MATLAB® code has been developed to compute Synoptic Lagrangian Maps (SLMs). When applied to an oceanographic flow, SLMs produce plots of the time that it takes particles in various regions to encounter the coast or escape to the open ocean. Such maps are of interest to the oceanography community. A new algorithm for SLM computation has been developed resulting in orders of magnitude increase in efficiency. Previously SLM computation for a week of flow data was a problem limited to massively parallel supercomputers. With the new algorithm, similar data is computed in a few days on a single processor machine.
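The escape-time idea behind an SLM can be sketched on a toy flow (a linear saddle, not the QUODDY Chesapeake data); the forward-Euler integrator and tiny grid here are deliberately crude:

```python
import numpy as np

def escape_time_map(velocity, grid_x, grid_y, domain, dt=1e-3, t_max=10.0):
    """Crude synoptic Lagrangian map: advect a particle from each grid point
    with forward Euler and record when it leaves the domain (inf = trapped)."""
    xmin, xmax, ymin, ymax = domain
    T = np.full((len(grid_y), len(grid_x)), np.inf)
    for i, y0 in enumerate(grid_y):
        for j, x0 in enumerate(grid_x):
            x, y, t = x0, y0, 0.0
            while t < t_max:
                if not (xmin < x < xmax and ymin < y < ymax):
                    T[i, j] = t
                    break
                u, v = velocity(x, y)
                x, y, t = x + dt * u, y + dt * v, t + dt
    return T

# Toy saddle flow dx/dt = x, dy/dt = -y: a particle starting at (x0, 0)
# crosses |x| = 2 at t = ln(2/|x0|), so escape time decays with |x0|.
saddle = lambda x, y: (x, -y)
T = escape_time_map(saddle, np.array([0.5, 1.0]), np.array([0.0]),
                    domain=(-2.0, 2.0, -2.0, 2.0))
```

The efficiency gain described above comes from smarter algorithms than this per-particle loop; the sketch only shows what quantity an SLM tabulates.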

The platform-independent MATLAB® implementations of the algorithms for computing DHTs, invariant manifolds, and SLMs should prove valuable tools for studying the dynamics of complex oceanographic flows.

FACULTY ADVISORS
Professor Reza Malek-Madani
Associate Professor Gary O. Fowler
Mathematics Department


 
Sarah M. Coulthard
Midshipman First Class
United States Navy

Effects of Pulsing on Film Cooling of Gas Turbine Airfoils

The objective of this project was to determine the effects of pulsed film cooling on turbine blades. High combustor temperatures, resulting in elevated turbine inlet temperatures, produce high engine efficiency. At current operating temperatures, the turbine inlet temperature is above the melting point of the turbine blades. Thus cooling the blades in the first stages after the combustor is essential. Current methods for film cooling utilize a continuous stream of bleed air from the compressor. This air is routed into a cavity inside each blade and bled out of holes onto the blade surface, creating a film of cool air. Pulsed film cooling may reduce the amount of bleed air used, thus increasing the efficiency of the engine by allowing more air to flow through the combustor, while providing equivalent protection for the blades.

In this study, a section of a turbine blade was modeled using a plate with a row of five film cooling holes. Coolant air was pulsed via solenoid valves from a plenum, while a wind tunnel provided a mainstream flow. Temperature and velocity fields were measured over the blade surface with varying blowing rates of the coolant and frequencies of pulsing. The film cooling effectiveness, a measure of how well the coolant protects the blade surface, was calculated based on the measured temperatures. The results were compared to baseline cases with continuous blowing and no blowing. The overall best case was continuous film cooling with the jet velocity one fourth of the mainstream velocity. However, results showed that pulsed film cooling has the potential to provide an equivalent or greater film cooling effectiveness for higher jet velocities. The case of pulsed jets with a jet velocity equal to the mainstream velocity, pulsing frequency of 20 Hertz, and 75% duty cycle showed an increased film cooling effectiveness and decreased heat transfer compared to the continuous blowing case.
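The effectiveness quoted here is, in its standard form, the adiabatic film cooling effectiveness:

```latex
\eta = \frac{T_\infty - T_{aw}}{T_\infty - T_c}
```

where $T_\infty$ is the mainstream temperature, $T_{aw}$ the adiabatic wall temperature measured over the surface, and $T_c$ the coolant temperature at the hole exit; $\eta = 1$ corresponds to a wall held at the coolant temperature and $\eta = 0$ to no protection.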

This study suggests that pulsed film cooling has the potential to adequately protect gas turbine blades with additional research, ultimately allowing for an increased efficiency in a gas turbine engine.

FACULTY ADVISORS
Associate Professor Ralph J. Volino
Professor Karen A. Flack
Mechanical Engineering Department


 
Michael G. Dodson
Midshipman First Class
United States Navy

An Historical and Applied Aerodynamics Study of the Wright Brothers'
Wind Tunnel Test Program and Application to Successful Manned Flight

Based on the most accurate surviving description of the Wright Brothers’ wind tunnel, a replica was constructed and used to determine the effect flow quality and experimental method had on the Brothers’ results, and whether those results were useful in a quantitative sense. The research incorporated static and total pressure measurements, velocity surveys across the jet, and quantitative flow visualization. Velocity surveys involved high resolution dynamic pressure measurements along the horizontal and vertical test section axes. Particle image velocimetry provided velocity magnitudes, turbulence intensities, and vorticity measurements in the test section. Force measurements on an airfoil model supported the conclusions regarding the effect of flow characteristics on aerodynamic measurements.

Testing revealed boundary layers extending 2.5″ from each wall. In the center of the tunnel was a 5″ diameter “dead zone” in which the flow velocity was 20% lower than the maximum tunnel velocity. Isolated pockets of high velocity flow reaching 35 mph existed outside the “dead zone”. PIV data revealed asymmetric load distributions on the airfoil due to velocity and vorticity gradients, and indicated the Wrights’ lift measurements were at least 7% low due to flow interactions with the lift balance. Direct force measurements showed the Wrights’ lift measurements were at least 6% and as much as 15% low depending on the Wrights’ true tunnel velocity. Scaling from the tunnel to the Wright Flyer increased the discrepancy by an additional 14% and showed the Wrights’ drag prediction to be 300% too high, resulting in highly inaccurate efficiency predictions. Thus, though they learned a great deal from their wind tunnel experiments, the Wrights’ quantitative data was not applicable to full scale design.

The conclusions provide insight into the birth of aviation and the men who were the first to succeed - despite limitations and deficiencies with their equipment, experience, and knowledge.

FACULTY ADVISOR
Assistant Professor David S. Miklosovic
Aerospace Engineering Department


 
Thomas W. Dunbar
Midshipman First Class
United States Navy

Artificial Potential Field Controllers for Robust Communications in a Network of Swarm Robots

An active area of research in the robotics community is “swarm control,” where many simple robots work together to execute tasks which are beyond the capability of any single robot acting alone. Yet in order for the swarm members to work together effectively they must maintain a reliable and robust wireless communication network among themselves.

The goal of this project was to create a motion control law which could fulfill the dual and sometimes conflicting requirements of executing a primary mission (e.g., search and rescue), while maintaining a robust mobile wireless communication network among the swarm members.

The success or failure in sending or receiving a wireless message is inherently probabilistic, but the odds of successfully relaying a message increase considerably based upon the spatial arrangement of the swarm members. This imposes a variety of constraints on each robot’s motion. Each robot sending a message should:

1. maintain a line of sight to the receiving robot (esp. in an environment like a cave or bunker with dense walls);
2. stay within close proximity of the receiving robot (the range is dictated by the power of the transmitter); and
3. increase the overall redundancy of the swarm by maintaining requirements 1 and 2 for two or more receiving robots simultaneously.

To this end, several artificial potential field controllers - a popular method of robotic control - have been developed in this project and simulated to determine their success in controlling the swarm. At a higher level, the project addressed the challenge of composing a motion control law to achieve the primary mission, while maintaining as many communication constraints as possible.
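A minimal artificial-potential-field law of the kind described, combining goal attraction with a soft connectivity term toward a relay, might look like the sketch below; the gains, ranges, and single-relay scenario are invented for illustration and are not the project's controllers.

```python
import numpy as np

def apf_velocity(pos, goal, neighbors, r_comm, k_goal=1.0, k_net=20.0):
    """Artificial-potential-field law: attract toward the goal, but pull back
    toward any neighbor about to drift past a fraction of the comm range."""
    v = k_goal * (goal - pos)                    # primary-mission term
    for n in neighbors:
        d = np.linalg.norm(pos - n)
        if d > 0.7 * r_comm:                     # soft connectivity constraint
            v += k_net * (d / r_comm - 0.7) * (n - pos) / d
    return v

# One robot chasing a goal that lies beyond comm range of a fixed relay
relay = np.zeros(2)
pos = np.array([1.0, 0.0])
goal = np.array([6.0, 0.0])
r_comm = 3.0
for _ in range(2000):                            # simple Euler integration
    pos = pos + 0.01 * apf_velocity(pos, goal, [relay], r_comm)
# The robot settles where the two pulls balance, short of breaking the link.
```

With these invented gains the robot halts roughly 2.6 units from the relay, inside the 3-unit range, illustrating the mission-versus-connectivity compromise such controllers must negotiate.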

This project included a proof-of-concept implementation of the motion control law on real robots. In addition, this project simulated and statistically analyzed the controller to determine its effectiveness at achieving the primary mission and maintaining a robust communication network. The effectiveness of the control law was seen both in simulation and experiment. Overall the robustness of the swarm was increased 200-300% in the scenarios considered.

FACULTY ADVISOR
Assistant Professor Joel M. Esposito
Weapons and Systems Engineering Department


 
Grant I. Gillary
Midshipman First Class
United States Navy

Normal Mode Analysis of the Chesapeake Bay

The purpose of this project was to find the normal modes for a mathematical model of the Chesapeake Bay geometry. The method used, normal mode analysis, was similar to that of Eremeev et al. [1992a] and Lipphardt et al. [2000]. Normal mode analysis uses a truncated basis set of velocity fields to approximate the flow for a specific body of water. The approach taken in this project uses the three modes described by Lipphardt et al. [2000] for application to Monterey Bay: one mode corresponding to flows described by stream functions, one corresponding to flows described by velocity potentials, and an inhomogeneous mode that accounts for forcing functions at the boundaries. In practice, linear combinations of these three normal modes are used to provide a complete picture of the flows in a specific body of water from limited amounts of empirical or model data. The ability to accurately fill in partial empirical velocity fields can provide the military with current data in coastal waters for mission planning or navigation. This approach is also useful for studying the spread of marine life in a body of water.

There is no analytical solution for the normal mode equations with a boundary as complicated as the Chesapeake Bay, which has 11,684 miles of shoreline but is only 189 miles long and 30 miles wide. Therefore, the normal modes have been calculated using a finite differencing method in MATLAB® alongside the finite element based program FEMLAB®. Convergence and accuracy of the solutions were first tested on the square, the circle and the equilateral triangle geometries, then the normal mode equations were solved for a representation of the Chesapeake Bay. This project has produced two useful products: the normal modes of the Chesapeake Bay and open source MATLAB® code that uses the finite difference method.
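The simple-geometry convergence tests have analytic benchmarks: on the unit square, for instance, the eigenvalues of the Dirichlet Laplacian are pi^2 (m^2 + k^2). A minimal finite-difference version of that check (a sketch, not the project's MATLAB®/FEMLAB® code) is:

```python
import numpy as np

def square_modes(n=30):
    """Dirichlet Laplacian eigenvalues on the unit square, via the standard
    5-point finite-difference stencil on an n-by-n interior grid."""
    h = 1.0 / (n + 1)
    d1 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D -d^2/dx^2
    L = (np.kron(d1, np.eye(n)) + np.kron(np.eye(n), d1)) / h**2
    return np.sort(np.linalg.eigvalsh(L))

lam = square_modes()
# Exact eigenvalues are pi^2 (m^2 + k^2); the fundamental mode is 2 pi^2 ~ 19.74
```

Agreement of the computed spectrum with these exact values (including the degenerate pair at 5 pi^2) is the kind of convergence evidence that justifies moving on to an irregular boundary like the Bay's.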

FACULTY ADVISORS
Professor Reza Malek-Madani
Mathematics Department
Assistant Professor Kevin L. McIlhany
Physics Department


 
Sean A. Jones
Midshipman First Class
United States Navy

Prediction and Improvement of Safety in Software Systems

The modern military’s ability to fight depends heavily on complex software systems, making the safety of such software of paramount importance. The transformation of the military’s analog combat systems to computer-based systems has been plagued by software problems ranging from benign flight simulator issues to ‘smart’ ships finding themselves dead in the water. The military’s interest in increasing automation in order to reduce manpower requirements makes even trivial software safety issues a serious concern. The software engineering community is not well equipped to reduce the safety risks incurred through the use of such systems, and stands to benefit from metrics, analysis tools, and techniques that address software system safety from a design perspective.

The purpose of this research project was to propose and develop tools that software engineers can use to address the issue of software safety. The project focused on safety prediction and improvement through the use of software fault trees coupled with “key nodes,” a fault-tree-based safety metric, and an algorithm for estimating the improvement costs necessary to achieve a targeted level of software safety. The safety prediction metric uses the key node property of fault trees, while the improvement algorithm is based on the mathematical relationship between nodes in a fault tree and yields an estimate of the man-hours necessary to improve a system to a targeted safety value, based on cost functions supplied by a component’s developer. These metrics and algorithms allow designers to measure and improve the safety of software systems early in the design process, reducing costs and improving resource allocation.
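The abstract does not define the key-node metric itself, but the bottom-up computation that any fault-tree-based safety metric builds on can be sketched. The gate structure, event names, and probabilities below are invented, and basic events are assumed independent.

```python
def failure_prob(node, probs):
    """Bottom-up failure probability of a fault tree.  A node is either a
    basic-event name or a ('AND'|'OR', [children]) gate; events independent."""
    if isinstance(node, str):
        return probs[node]
    gate, children = node
    p = [failure_prob(c, probs) for c in children]
    if gate == 'AND':                 # top fails only if all children fail
        out = 1.0
        for q in p:
            out *= q
        return out
    out = 1.0                         # OR: fails unless every child survives
    for q in p:
        out *= (1.0 - q)
    return 1.0 - out

# Toy system: the top event occurs if the sensor fails OR both redundant
# processors fail.
tree = ('OR', ['sensor', ('AND', ['cpu_a', 'cpu_b'])])
probs = {'sensor': 0.01, 'cpu_a': 0.05, 'cpu_b': 0.05}
p_top = failure_prob(tree, probs)     # 1 - (1 - 0.01)(1 - 0.0025) = 0.012475
```

Propagating probabilities this way is also what makes cost trade-offs tractable: improving the node whose probability most influences the top event (here, clearly the sensor) buys the most safety per man-hour.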

FACULTY ADVISOR
Associate Professor Donald M. Needham
Computer Science Department


 
Elizabeth R. Kealey
Midshipman First Class
United States Navy

Investigation of Elliptical Cooling Channels for a Naval Electromagnetic Railgun

The future Naval Electromagnetic Railgun will use a mega-ampere electrical current to generate an electromagnetic force which accelerates a projectile to hypersonic velocities. The applied current can raise the bulk temperature of the rails by over 100 degrees Celsius, necessitating an active cooling system for the rails to sustain high rates of fire without incurring permanent damage to the gun. The electromagnetic force on the projectile and the rails creates a complicated stress state that varies as the projectile passes along the rail: first uniaxial and then biaxial compression act on the rails.

In this study, a system of cooling channels for fluid flow down the length of the rails was considered, and channels with elliptical cross sections were examined. Elliptical shapes were considered due to the high surface area available for convection, relatively low impact on the stress distribution, and low stress concentration effect. By treating an elliptical channel as a variable area fin and varying the size and aspect ratio of the ellipse and the distance between channels, the heat transfer capability of a channel array was maximized based on given flow conditions and applied heat flux. The optimal channel design was further constrained by the applied compressive stresses. It was found that ellipses of different aspect ratios are optimal for the uniaxial and biaxial stress states, and the optimal channel design was limited by the competing effects of these two structural constraints.
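The report's variable-area fin model is not reproduced in the abstract, but the geometric trade it exploits, more convective perimeter per unit channel area at higher aspect ratio, can be illustrated with Ramanujan's perimeter approximation (the stress constraints that cap the aspect ratio are omitted here):

```python
import math

def ellipse_perimeter(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse with semi-axes a, b."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def perimeter_per_area(aspect, area=1.0):
    """Convective perimeter per unit cross-sectional area for an elliptical
    channel of fixed area pi*a*b and aspect ratio a/b."""
    b = math.sqrt(area / (math.pi * aspect))
    a = aspect * b
    return ellipse_perimeter(a, b) / area
```

Elongating a channel of fixed area monotonically increases the surface available for convection; the structural half of the trade-off, the stress concentration under uniaxial versus biaxial compression, is what limits the ellipse in the actual design.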

To test the thermal aspect of the design, a representative set of channels was machined into one-third-scale copper rails using wire electrical discharge machining. Tests were performed using both a steady-state heat flux, to determine the overall heat transfer coefficient, and transient conditions, to determine the system thermal relaxation time. To verify the structural aspect of the design, a finite element analysis was performed on the rail cross section to compare the computational stress concentration factors with the theoretical correlations used in the literature. The results of both the thermal experiments and the finite element analysis were in reasonable agreement with the predicted results.

FACULTY ADVISORS
Assistant Professor Andrew N. Smith
Assistant Professor Peter J. Joyce
Mechanical Engineering Department


 
Stephen S. McMath
Midshipman First Class
United States Navy

Parallel Integer Factorization Using Quadratic Forms

Factorization is important for both practical and theoretical reasons. In secure digital communication, security of the commonly used RSA public key cryptosystem depends on the difficulty of factoring large integers. In number theory, factoring is of fundamental importance.

This research analyzed algorithms for integer factorization based on continued fractions and binary quadratic forms, focusing on runtime analysis and on the comparison of parallel implementations of the algorithm. In the process, it proved several valuable results about continued fractions.

In 1975, Daniel Shanks used class group infrastructure to modify the Morrison-Brillhart algorithm and to develop Square Forms Factorization, but he never published his work on this algorithm or provided a proof that it works. This research began by analyzing Square Forms Factorization, formalizing and proving the premises on which the algorithm is based. First, this research analyzed the connections between continued fractions and quadratic forms, proving, among other things, that the square of any ambiguous cycle is the principal cycle. Then, the connection with ideals was developed, requiring a generalization to the standard description and formulas for multiplication of ideals. Lastly, the connection was made with lattices and minima, allowing for a generalization of the formulas relating composition with distance. These results are fundamental to explaining why Square Forms Factorization works.

This research also analyzed several variations, including two different parallel implementations, one of which was considered by Shanks and one of which is original. The results suggest that the new implementation, which utilizes composition of quadratic forms, is slower for small numbers of processors, but is more efficient asymptotically as the number of processors grows.

Shanks’ Square Forms Factorization, including a concept he called Fast Return, has been implemented in C and in Magma, and some experimental runtime analysis has been done. A parallel version in C has been implemented and tested extensively.
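A serial sketch of the algorithm (omitting Fast Return, the multipliers needed when only a trivial divisor appears, and the parallel variants studied in this project) walks the continued-fraction expansion of sqrt(n) until a square form appears at an even index, then walks the inverse square root's cycle to a symmetry point:

```python
import math

def squfof(n):
    """Serial sketch of Shanks' Square Forms Factorization for odd composite n."""
    s = math.isqrt(n)
    if s * s == n:
        return s
    # Forward cycle: P, Q recurrences of the continued fraction of sqrt(n)
    Pprev, Qprev, Q = s, 1, n - s * s
    i = 1
    while True:
        if i % 2 == 0:                       # look for a square Q at even index
            r = math.isqrt(Q)
            if r * r == Q:
                break
        b = (s + Pprev) // Q
        P = b * Q - Pprev
        Qprev, Q = Q, Qprev + b * (Pprev - P)
        Pprev = P
        i += 1
    # Jump to the cycle of the inverse square root form
    b = (s - Pprev) // r
    P = b * r + Pprev
    Qprev, Q = r, (n - P * P) // r
    # Walk until the symmetry point, where P repeats
    while True:
        b = (s + P) // Q
        Pn = b * Q - P
        if Pn == P:
            break
        Qprev, Q, P = Q, Qprev + b * (P - Pn), Pn
    return math.gcd(n, P)
```

For example, squfof(11111) finds the square form Q = 25 after six forward steps and returns the factor 41 (11111 = 41 x 271). The infrastructure results described above explain why the reverse walk lands at a point whose P shares a factor with n.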

FACULTY ADVISORS
Professor W. David Joyner
Mathematics Department
Assistant Professor Frederick L. Crabbe, IV
Computer Science Department


 
Joshua W. Wort
Midshipman First Class
United States Navy

A Network Interface Card for a Bidirectional Wavelength Division
Multiplexed Fiber Optic Local Area Network

In this project, a network interface card (NIC) for a fiber optic local area network has been designed and simulated. In the proposed network, each NIC has two optical transmitters and receivers on it, operating at different wavelengths. This allows the implementation of various network topologies on an optical fiber utilizing dense wavelength division multiplexing and bidirectional optical add / drop multiplexers. For example, a ShuffleNet network can be implemented on a single fiber optic ring. The ShuffleNet topology minimizes the number of hops through the network for a given data packet. For an eight node network, a maximum of three hops is required between any two of the nodes.

In this network, control logic routes data through the network by selecting which transmitter to use. Because routing is performed on the NIC, network control is more distributed than in a typical network built around a central switch or hub. Distributed control, along with high-data-rate transmitters and receivers on each NIC, results in high network throughput and low latency. The predicted maximum aggregate throughput for this network is fifty gigabits per second, calculated by multiplying the number of transmitters (16) by the data rate of each transmitter (3.125 Gb/s).
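The hop and throughput figures can be checked directly: build the eight-node ShuffleNet digraph from the standard (p, k)-ShuffleNet connection rule, take its diameter by breadth-first search, and multiply transmitters by line rate. This sketch assumes the textbook topology, not the specific NIC design.

```python
from collections import deque

def shufflenet_edges(p, k):
    """Directed (p, k)-ShuffleNet: k columns of p**k rows.  Node (c, r) links
    to column (c + 1) mod k, rows (p*r + j) mod p**k for j = 0..p-1."""
    rows = p ** k
    return {(c, r): [((c + 1) % k, (p * r + j) % rows) for j in range(p)]
            for c in range(k) for r in range(rows)}

def diameter(edges):
    """Worst-case hop count, by breadth-first search from every node."""
    worst = 0
    for src in edges:
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

hops = diameter(shufflenet_edges(2, 2))   # 8-node ShuffleNet -> 3 hops
throughput = 16 * 3.125                   # 8 NICs x 2 transmitters at 3.125 Gb/s
```

Both numbers match the abstract: a maximum of three hops between any two of the eight nodes, and 50 Gb/s aggregate throughput.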

FACULTY ADVISORS
Associate Professor R. Brian Jenkins
Captain Robert J. Voigt, USN
Electrical Engineering Department


 
Christopher D. Wozniak
Midshipman First Class
United States Navy

Analysis, Fabrication and Testing of a Composite Bladed Propeller
for a U.S. Naval Academy Yard Patrol (YP) Craft

The U.S. Navy, and much of the maritime industry, uses nickel-aluminum-bronze (NAB) as the primary material for propeller construction. This is done for many reasons, including its anti-biofouling characteristics, high stiffness, and low corrosion potential. However, NAB is a cathodic metal. While it experiences little corrosion itself, its presence leads to galvanic corrosion of the surrounding hull steel.

The Navy has considered the feasibility of a composite bladed propeller design, but several variables need investigation. The goal of this Trident project was to design, build and test the Navy’s first composite propeller. The detailed objectives of the research were to: evaluate a hub design; perform a structural design of a Yard Patrol (YP) craft composite bladed propeller; and finally, build and test a full-scale propeller using the composite materials.

As the general concept used composite blades attached to a NAB hub, the first step was to develop a design for the hub-blade interaction. Afterwards, the loads were predicted using computational fluid dynamics. The pressure plot was then combined with the geometry in a finite element structural analysis program to determine fiber orientation and strength characteristics. A full-scale mold plug was created using stereolithography. Finally, the carbon/epoxy blades were laid up in this mold.

The YP craft was selected as the test platform as it: 1) has two propellers (in the event of failure); and 2) is used for many hours, often in harsh conditions.

Testing included: 1) benchmarking the standard NAB propellers; 2) installing and evaluating new polyurea-encapsulated propellers developed by the Naval Surface Warfare Center-Carderock; and, 3) evaluation of the composite bladed propeller.

FACULTY ADVISOR
Associate Professor Paul H. Miller
Naval Architecture and Ocean Engineering Department

 