Trident Scholar Program
Trident Scholars 2002

Trident Scholar Abstracts 2002

Steven R. Burns

Midshipman First Class
United States Navy

Steering Control Compensation of Accelerating Vehicle Motion

The Automated Highway System (AHS) proposes to control each vehicle on the road and is envisioned as the personal transportation mode of the future. This system must be implemented gradually, because the technology involved requires driver trust and confidence. Control subsystems, like the one proposed by this project, bring AHS one step closer to realization by fostering driver faith in computer-controlled vehicle subsystems.

Today's vehicles offer no dedicated system to assist the driver in maintaining control during emergency braking situations where maneuvering the car is necessary. Insufficient friction between the tires and the road leaves little turning capability during braking, producing a turning response too slow for emergency situations. Additionally, the change in weight distribution caused by braking destabilizes the vehicle dynamics. Both of these conditions make controlling the vehicle during an emergency turning/braking maneuver difficult without computer assistance. The automotive industry's reluctance to develop control systems like the one in this project is a result of the high costs and large spaces required for full-size vehicle testing.

This project will attempt to solve the braking/turning problem using a previously designed vehicle-model test bed and a sophisticated, scaled-down vehicle. The results from scale-model research can be scaled to actual vehicle size using known scaling techniques. The use of scale models for real-world testing dramatically reduces the cost of research and therefore increases the number of institutions that can conduct research in the automotive field.

The final goal of the project is to reduce the time required for the vehicle to execute an emergency maneuver in which the brakes and steering control are engaged. The vehicle test bed will allow vehicle testing to take place on a greatly reduced scale and budget. A mathematical model of the dynamics of a full-sized car will be developed simultaneously, and the two models, test bed and mathematical, will be used to refine one another to completion. Once both models are completed, they will be used to develop and test the design of a control system that solves the design problem.

The control system will be designed using the advanced mathematical techniques of H-infinity control and gain scheduling, allowing the control system to respond optimally to each situation despite significant changes in the operating conditions. The result is a robust system capable of helping the driver avoid an accident. The goal of this project is not to take control away from the driver but only to optimize the vehicle handling, giving even the best driver greater control over the vehicle when the system is engaged than in the same car without the system.
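The gain-scheduling idea can be sketched in a few lines: the controller's gains are tabulated at several operating points and interpolated in between, so the control law adapts as the operating condition (here, speed) changes. The breakpoints and gain values below are invented for illustration, not taken from the project's design:

```python
def scheduled_gain(speed, schedule):
    """Linearly interpolate a controller gain from a (speed, gain) table."""
    pts = sorted(schedule)
    if speed <= pts[0][0]:
        return pts[0][1]
    if speed >= pts[-1][0]:
        return pts[-1][1]
    for (s0, g0), (s1, g1) in zip(pts, pts[1:]):
        if s0 <= speed <= s1:
            frac = (speed - s0) / (s1 - s0)
            return g0 + frac * (g1 - g0)

# Hypothetical schedule: steering gain shrinks as speed grows.
schedule = [(5.0, 2.0), (15.0, 1.2), (30.0, 0.6)]
```

In an H-infinity design, each tabulated gain would itself come from a robust synthesis at that operating point; the interpolation is only the scheduling step.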

The system designed in this project will significantly lower the settling time of the emergency braking/turning maneuver, decreasing the likelihood of an accident and increasing driver control over the vehicle, by governing the aspects of the maneuver that are beyond the driver's control and normally go uncontrolled.

Assistant Professor Richard T. O'Brien, Jr.
Assistant Professor Jenelle L. Piepmeier
Weapons & Systems Engineering Department

Daniel F. Chiafair
Midshipman First Class
United States Navy

Stability Analysis of a Nonlinear System Stabilizing Controller for an Integrated Power System

This research focuses on the derivation of highly survivable control algorithms for the U.S. Navy's DD-21 Integrated Power System. Incorporating active, survivable control algorithms into an integrated, solid-state power distribution system will maintain power continuity during major, combat-induced casualties.

Today’s warships are constructed with segregated mechanical propulsion and electric power systems. In current ships, the power dedicated to ship propulsion is about 90% of total ship power and the power dedicated to electrical generation is about 10%. The existing ship service electrical system is not very robust. Because the system is very tightly coupled, a single casualty can disrupt the entire system, causing a total loss of electrical power. Even though the mechanical propulsion power system may be in perfect working condition, it cannot provide any electrical power to the ship. A fundamentally new system, the Integrated Power System (IPS), will be deployed on DD-21 and future naval combatants to resolve this problem. The key advantage of an electronically controlled Integrated Power System is the ability to actively control the flow of power throughout the distribution system. The Integrated Power System requires sophisticated control algorithms and automation infrastructure to maintain power and, if necessary, re-route power to critical systems. Currently, only preliminary control algorithms exist to control the Integrated Power System on DD-21. More robust algorithms may be able to significantly improve overall system survivability.

This research will use existing Integrated Power System mathematical models and computer simulations developed by Energy Systems Analysis Consortium (ESAC) under the direction of the Navy. The model will be refined to meet the needs of this project. A distributed and hierarchical architecture will be selected as the strategy for the control system. Current algorithms and their limitations will be investigated and a quantitative assessment baseline will be established. New, more survivable control algorithms will be derived and then tested via computer simulation. A quantitative comparison of the baseline and the new algorithms will be used to determine where improvements have been made. It is anticipated these algorithms will be able to withstand major losses to the control system and still provide continuity of service to the ship.

Assistant Professor Edwin L. Zivi
Weapons & Systems Engineering Department

Joshua B. Datko
Midshipman First Class
United States Navy

Supporting Secure, Ad Hoc Joins for Tactical Networks

The information age is driving the Network Centric Warfare (NCW) concept, causing the Navy to re-evaluate its existing methods of command and control and consider a more distributed approach. Only recently has the Defense Advanced Research Projects Agency (DARPA) started its Tactical Targeting Network Technology (TTNT) initiative, which implements the NCW concept by equipping the warfighter with heightened awareness and greater decision-making ability. The purpose of this research is to design, develop, and test several protocols needed to support such a Tactical Targeting Network (TTN).

The current NCW concept fails to deliver the full potential of the network to the warfighter. This shortcoming inhibits TTNT from providing a low-latency, high-bandwidth, and dynamically reconfigurable network infrastructure. The realization of TTNT also hinges on many theoretical problems. One such problem, and the focus of this research, is developing an algorithm by which users can join and be authenticated on an ad hoc network.

Existing algorithms for joining ad hoc networks do not provide the functionality needed in TTNT. For example, the underlying algorithms in Sun Microsystems’ Jini Technology enable a network-centric framework; however, Jini lacks the necessary authentication procedures and non-hierarchical infrastructure. Likewise, the developing Bluetooth ad hoc technology implements an insecure authentication-joining algorithm. Both of these existing solutions are incompatible with TTNT due to their insufficient attention to securing the network.

The initial portion of this research entails creating a network-joining algorithm that allows one user to join a pre-existing network. The algorithm is a procedure to determine whether or not a user can join the network. The algorithm will grant authentication based on a joining entity’s "trust characteristics." These characteristics include such elements as a user’s digital certificate, visual identification, and joining rationale. This joining algorithm incorporates a networking paradigm that is essential to efficient ad hoc and distributed operations and does not assume that all users are explicitly trusted. Once the algorithm has been derived, it will be implemented and modeled through Jini networking technology. The initial experiments will be conducted on a single-machine local server, with later experiments conducted over a wireless local area network.
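A minimal sketch of such a join decision might score trust characteristics against a threshold. The characteristic names, weights, and threshold below are invented placeholders, not the algorithm under development:

```python
def can_join(candidate, threshold=3):
    """Score a joining entity's trust characteristics against a threshold.

    The weights and keys are hypothetical stand-ins for the "trust
    characteristics" (certificate, visual ID, joining rationale).
    """
    weights = {
        "valid_certificate": 2,      # digital certificate checks out
        "visual_identification": 2,  # user identified visually
        "rationale_approved": 1,     # stated reason for joining accepted
    }
    score = sum(w for key, w in weights.items() if candidate.get(key))
    return score >= threshold
```

A real scheme would replace the boolean checks with cryptographic verification, but the decision structure, evidence weighed against a policy threshold, is the same.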

Empirical data will be collected in the form of algorithmic running times. Using different scenarios, the algorithm’s execution time will be measured. As more complex scenarios are developed, the growth in execution times will be compared to the algorithm’s predicted values. The results of these simulated scenarios will aid in understanding how the algorithm performs under realistic conditions, like high network delay. Once the experimental data have been analyzed, the algorithm will be re-evaluated in terms of functionality and efficiency.

This research is expected to contribute to the development of secure, ad hoc network joining algorithms. The algorithm developed for this research can be expected to advance methods of network discovery and joining, as well as network-centric authentication schemes. These methods will improve upon existing Jini and Bluetooth authentication models. Finally, this research will contribute to tactical war fighting technology by developing an algorithm to access a secure, ad hoc infrastructure that will increase future warfighters’ battlespace awareness and knowledge.

Assistant Professor Margaret M. McMahon
Associate Professor Donald M. Needham
Computer Science Department

John D. Dirk
Midshipman First Class
United States Navy

Electronic Reliability and the Environmental Thermal Neutron Flux

Modern microelectronics are characterized by microchips with high bit densities. When boron is used in the manufacturing process, these chips tend to be highly sensitive to low-energy neutrons (less than 1 eV), usually called thermal neutrons. When background thermal neutrons originating from cosmic rays interact with the nuclei of certain atoms, ionizing radiation is produced that can change the logic state of a cell on the microchip. For example, in a memory cell the electrons released by the ionization could cause a zero to become a one, an occurrence known as a "Bit Flip." This phenomenon is also known as a Single Event Upset and is an important problem facing computer manufacturers who employ boron-based materials in their microelectronics.
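The upset itself is easy to state in code: a single-event upset toggles one bit of a stored word, which an XOR with a one-bit mask models exactly (a toy illustration of the logic effect, not a radiation model):

```python
def flip_bit(word, position):
    """Model a single-event upset: XOR toggles exactly one bit of a word."""
    return word ^ (1 << position)
```

Toggling the same bit twice restores the original word, which is why error-correcting memory can recover from an isolated upset it detects.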

The goal of this project is to characterize the thermal neutron flux and to better understand how it depends on local environmental variation. These outcomes will be achieved through modeling, field measurements and building material characterization. The project is being conducted under the auspices of an IBM-Naval Academy research agreement.

The detection system is based on He-3 gas-proportional counters. Two detection systems will be employed at each field location, both based on the He-3 probes. One system is driven by commercially available, programmable survey meters built by Ludlum Instruments. The other uses a pulse-height detection system developed by Canberra Industries. In each system one detector is shrouded with silicon-based, boron-impregnated rubber that essentially shields it from thermal neutrons (an attenuation factor of approximately 240); faster neutrons still penetrate the detector and are counted. By subtracting the shielded probe's counts from the unshielded probe's, the thermal flux can be deduced. The two data-collection methods (Ludlum and Canberra) will be calibrated at the National Institute of Standards and Technology (NIST) and should yield the same flux within error margins.
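The shielded/unshielded subtraction reduces to simple arithmetic. The sketch below assumes a detector sensitivity known from calibration; the variable names and example numbers are invented for illustration:

```python
def thermal_flux(unshielded, shielded, live_time, sensitivity):
    """Thermal-neutron flux deduced from a paired-detector measurement.

    unshielded, shielded: counts from the bare and boron-shrouded probes
    live_time: counting time in seconds
    sensitivity: counts per second registered per unit of thermal flux
                 (assumed known from calibration)
    """
    # Fast neutrons hit both probes equally, so the difference is thermal.
    thermal_rate = (unshielded - shielded) / live_time
    return thermal_rate / sensitivity
```

For example, 1200 unshielded counts against 200 shielded counts in 100 s, with a sensitivity of 0.5, imply a thermal count rate of 10 counts/s and hence a flux of 20 in the calibration's flux units.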

Field measurements will be made at a variety of locations. Local building materials, latitude, longitude, altitude, and the proximity of water will all be noted. Measurements will be taken in different buildings in and around Annapolis, on Mt. Washington, NH, and in Leadville, CO.

Experiments investigating common building materials will be conducted by irradiating samples with a 14 MeV neutron generator at the Naval Academy. These experiments will look for first-order effects that certain materials may contribute. Additional experiments may be conducted at the National Institute of Standards and Technology in Gaithersburg, Maryland.

Modeling is being conducted using the Monte Carlo N-particle Transport Code (MCNPX) developed by the Department of Energy for nuclear shielding applications. The user inputs information on geometry, material composition, the source, and detectors. The program iteratively calculates single particle histories based on nuclear models and data tables. The course of each particle series is driven by a pseudo-random number set. MCNP compiles statistics, based on hundreds of thousands of histories, to predict particle flux at given locations. This modeling will allow dependencies on local water and building characteristics to be considered in a controlled, theoretical environment. This modeling should help identify dependencies for which the field measurements are looking.
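The Monte Carlo idea behind MCNPX can be illustrated far more simply: the one-dimensional toy below tracks random particle histories through a slab, giving each an exponentially distributed free path, and recovers the analytic transmission exp(-mu*d). It is a sketch of the method's statistics, not of MCNPX itself:

```python
import math
import random

def transmission(mu, d, histories, seed=0):
    """Estimate the fraction of particles crossing a slab of thickness d.

    Each history samples a free path from an exponential distribution with
    attenuation coefficient mu; the particle is transmitted if its free
    path exceeds the slab thickness. The analytic answer is exp(-mu * d).
    """
    rng = random.Random(seed)  # pseudo-random driver, as in MCNP histories
    passed = sum(1 for _ in range(histories) if rng.expovariate(mu) > d)
    return passed / histories

estimate = transmission(mu=1.0, d=1.0, histories=20000)
# With 20,000 histories the estimate clusters tightly around exp(-1).
```

MCNPX does the same thing with full 3-D geometry, nuclear data tables, and many interaction types, but the statistical engine, averaging over hundreds of thousands of pseudo-random histories, is the one shown here.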

The thermal neutron flux is a quantity of interest to microelectronics producers today. A thorough survey of the thermal neutron flux has, however, yet to be accomplished. This project attempts to correlate observed variations in the slow neutron flux with environmental conditions and thereby to provide a more robust understanding of the cosmic ray induced thermal neutron environment.

Professor Martin E. Nelson
Mechanical Engineering Department
Visiting Professor James F. Ziegler
Physics Department

Amanda L. Donges
Midshipman First Class
United States Navy

A Multinational Empirical Analysis of Humanitarian Assistance

In an age of globalization, the development of productive nations is paramount. Over the past century, the United States has worked to aid in the advancement of underdeveloped countries, with the hope of expanding trade and fostering worldwide growth. We strive for the goal of world prosperity through the implementation of numerous political and economic tools. Humanitarian aid is a means by which we facilitate progress around the globe.

The distribution of humanitarian aid is a complex and daunting action for any country to take. The U.S., if choosing to offer aid to a country, must determine which form of assistance is most beneficial. Relief must be operationalized and adapted to fit various forms of economic, political, societal, and cultural environments. The benefits accruing to humanitarian aid are also affected by the form in which the assistance is received. Certain countries may utilize monetary sums better than military assistance or tangible goods. As a result of countries' varying abilities to properly utilize and exploit all forms of grants, the United States must weigh the gains of each type of humanitarian aid and make a selection based on these findings.

In order to assist in making such determinations, and in assessing the merits of particular types of humanitarian aid, I intend to gather data that illustrates the trends and behaviors of potential aid recipients, and then construct an econometric model of the impact of aid on per capita GDP growth and other measures of well being.

The methods I intend to use are extensive, but based upon basic statistical tools, such as multiple linear regression models, time-series analysis, and hypothesis testing. In addition, econometric tools will aid in the comparison and evaluation of possible courses of action that the United States may take. Vector Autoregression (VAR) models are well equipped for forecasting the response of variables, such as per capita GDP growth, to changes in humanitarian aid policy. Furthermore, causality tests can be used to reveal the causal relationship between variables over time, and provide insight into such questions as: does the political state of the country determine the effectiveness of aid, or does the effectiveness of aid determine the political state of the country?
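The lagged-response idea behind estimating a VAR equation can be illustrated with a tiny simulation (this is an illustration with invented data, not the study's actual SAS analysis): generate a "growth" series that responds to last period's "aid" with a known coefficient, then recover that coefficient by ordinary least squares on the lagged predictor.

```python
import random
import statistics

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

rng = random.Random(42)
# Simulated data: growth this period = 0.5 * aid last period + noise.
aid = [rng.gauss(0, 1) for _ in range(500)]
growth = [0.5 * aid[t - 1] + rng.gauss(0, 0.2) for t in range(1, 500)]

slope = ols_slope(aid[:-1], growth)  # estimate of the true coefficient 0.5
```

A full VAR estimates several such equations jointly, with multiple lags of every variable on the right-hand side, and a Granger-style causality test asks whether the lagged coefficients are jointly significant.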

Statistical software packages, such as SAS Version 8 and Enterprise Guide, are the backbone of this study’s regression generation. Both of these programs will help take a multitude of data gathered from the World Development Indicators and the PRS Group, and transform it into a clean set of simultaneous equations exploring the nature of humanitarian aid. In the end, the estimated equations will provide potential answers to various questions. First, does the form of humanitarian aid offered play a role in its effect? Second, do country-specific variables, such as political conditions, influence the performance of humanitarian aid, and if so, to what extent? Finally, one can determine from the formulated models whether it is beneficial to offer a certain category of country aid, or whether it is not in the beneficiary’s best interest.

A study of this nature should provide a resource for potential benefactors to consult. If they are interested in providing assistance to a low-income country, the benefactor can refer to the regressions corresponding to a sample low-income country set. Based on the statistical data and macroeconomic theory, such a person can determine what form of humanitarian assistance should be granted, and how it will affect a country. Furthermore, the given benefactor can use the values of the country's independent variables to compute the values of its own dependent variables, providing supplemental information. With this knowledge, one can make key decisions regarding assistance.

Assistant Professor Matthew J. Baker
Economics Department
Associate Professor Gary O. Fowler
Mathematics Department

Benjamin A. Drew
Midshipman First Class
United States Navy

Underwater Gliders: Measurement Methods and Analysis 

The Autonomous Underwater Vehicle (AUV) is an unmanned vessel used in many applications, including the offshore oil industry, marine biology research, and salvage work, in an effort to replace divers. As today’s Naval Explosive Ordnance Disposal units look for innovative technological developments in minefield clearance and related missions clearing unexploded ordnance, the further employment of autonomous unmanned vehicles is under strong consideration. Instead of developing systems of high complexity and cost, it is worthwhile to investigate the development of a clandestine AUV with a singular and simple mission. In this Trident Scholar research project, I propose to design and construct an engineering-prototype miniature, efficient autonomous underwater vehicle, of dimensions consistent with the proposed mission, and to develop a control system that will allow it to search for a mine-like target.

The vehicle and control will be designed to search in an energy-efficient “gliding” configuration. Wings on the vehicle allow steerable gliding, which subsequently offers horizontal propulsion. As the AUV glides over an area, a fundamental sensor (probably a magnetometer) will control the search pattern. At a particular depth, the AUV will use its energy source, a battery, or a buoyancy modification system to propel itself toward the surface and initiate the search pattern again. Using an accelerometer with two directions of inclination to measure pitch and roll, and through simple trigonometric equations, the system should be able to derive the orientation of the vehicle and recognize any course changes that need to be made. A depth sensor will be added in order to maintain the search pattern in shallow waters. Finally, a magnetometer (or compass) will be employed as a means of maintaining a search-pattern turn rate and heading.
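The trigonometric step can be sketched directly: with the vehicle momentarily static, the accelerometer reads the components of gravity, and the tilt angles follow by inverse trigonometry. The axis conventions below are assumed for illustration, not taken from the project's hardware:

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (radians) from accelerometer components of gravity.

    Assumes the only acceleration sensed is gravity: x forward, y to the
    side, z down through the vehicle (a conventional, assumed body frame).
    """
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

A level vehicle reads only the z component of gravity and yields zero pitch and roll; a nose-up glide shifts part of gravity onto the x axis and the pitch angle follows. Heading (yaw) cannot be recovered this way, which is why the compass mentioned above is needed.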

Fundamentally, for the purpose of this project, the AUV will consist of a small gliding body and a control system that executes a controlled search pattern and initiates a propulsion system for the vehicle to recommence its search. The shape of the AUV is yet to be determined. With respect to the parameter uncertainties of the environment, emphasis will be placed on the motion characteristics of the vehicle body style, propulsion systems, and maneuverability. The external shape of the AUV is envisioned to be ray-like, similar to a triangle with a lifting surface, streamlined to reduce water resistance and simultaneously obtain the largest horizontal distance with the smallest vertical drop. The vehicle will contain an accelerometer to measure orientation with respect to gravity. In terms of maneuverability, predictive design equations involving dynamic motion parameters such as the effects of pitch, yaw, and vertical movement will be considered in order to maintain a level and defined approach across a search area. This Trident project is an extension of ongoing AUV developmental prototypes under the direction of Associate Professor Carl E. Wick and Assistant Professor Daniel J. Stilwell of the Weapons and Systems Engineering Department, in conjunction with the Naval Architecture and Ocean Engineering Department.

Associate Professor Carl E. Wick
Weapons & Systems Engineering Department

Tarek S. Elmasry
Midshipman First Class
United States Navy

Characterization of an Optical Self-homodyne DPSK Receiver

The goal of this project is to create a digital optical demodulator that can process information propagating at high data rates in an optical fiber. By processing data at a higher rate at the receiver, more data can be sent over a transmission line. Most demodulators in commercial applications utilize electrical systems; when an optical signal propagating in a fiber-optic channel arrives at the receiver to be demodulated, the conversion of the signal from an optical to an electrical format significantly slows the processing speed. By optically processing the signal just before the electrical receiver, information can be transmitted at higher data rates. Many methods of building an optical demodulator have been tried using different digital signal processing techniques. In this project a demodulator will be designed for differential phase shift keyed (DPSK) data using self-homodyne detection without a local oscillator. A local oscillator is often incorporated into the demodulator to provide a reference against which the incoming data can be compared and interpreted. By ridding the receiver of the local oscillator, the design of this receiver may be simpler than previously constructed receivers. In place of a local oscillator, a 3-dB coupler and a delay line will be used. With this configuration the incoming signal will serve as its own reference.
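The detection principle can be sketched numerically, with the optical interference between the signal and its one-symbol-delayed replica reduced to a complex multiplication (the interference term of |E(t) + E(t-T)|²). This is an illustration of the math, not of the project's hardware:

```python
import math

def dpsk_modulate(bits):
    """Differential encoding: a '1' flips the carrier phase by pi."""
    phase, symbols = 0.0, []
    for b in bits:
        if b:
            phase += math.pi
        symbols.append(complex(math.cos(phase), math.sin(phase)))
    return symbols

def self_homodyne_demod(symbols):
    """Compare each symbol with a one-symbol-delayed copy of itself.

    The sign of Re{E(t) * conj(E(t-T))} tells whether the phase changed,
    which is the information the coupler-plus-delay-line extracts optically.
    """
    bits = []
    for prev, cur in zip(symbols, symbols[1:]):
        bits.append(1 if (cur * prev.conjugate()).real < 0 else 0)
    return bits
```

Because each symbol is compared with its predecessor, the demodulator recovers every bit after the first without ever needing an absolute phase reference.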

Before a hardware design is implemented, a computer model of all components of the receiver will be generated to determine the expected performance of all receiver components. The software package that will be used for this project is very comprehensive but additional modeling may be needed for effects that the package may ignore. In the research, channel effects such as dispersion and attenuation will be examined. In the receiver, most of the attention will be given to the effects of improper constructive and destructive interference occurring where the signal is compared to the reference. Other effects that will be examined include polarization mismatching and noise processes in receiver components.

The primary measure of receiver performance is the bit error rate for transmitted data. Background research will be conducted primarily by surveying scholarly journals on this topic. Much of this research will consist of analyzing the propagation path of the information-carrying electromagnetic wave upon which modulation will take place. When the model is complete, the receiver will be built and tested. Thus far, research on this topic has been entirely theoretical; actually building the receiver and getting it to work would be a major step forward in this field of research.

If the receiver does not behave as predicted, the model will be modified to describe what is occurring. Determining whether the receiver is performing in accordance with the model will be a major challenge for this project, especially when experiments are done at high data rates. Successful performance of this demodulator could pave the way to more significant progress in the design of optical systems. The primary applications of this receiver would be in the military and commercial sectors.

Associate Professor R. Brian Jenkins
Associate Professor Deborah M. Mechtel
Electrical Engineering Department

Edward H.L. Fong
Midshipman First Class
United States Navy

Acquisition of 3-D Map Structures for Mobile Robots

When a mobile robot is introduced into an unfamiliar environment, it must be able to successfully navigate in its surroundings in order to perform its given tasks. One example would be to assist soldiers and marines in their operations on urban terrain, moving ahead and sending back data such as maps, pictures, or environmental conditions. To do this, a robot must explore its environment, generate some sort of map of the world it "sees," and place itself accurately on that map. This Trident project is designing and implementing algorithms that will provide a robot with the ability to accomplish these tasks while moving around in an unfamiliar, urban environment.

Dr. Alan Schultz’s team at the Naval Research Laboratory has already incorporated some of these exploration and localization capabilities into a mobile robot operating indoors. When placed in an unfamiliar environment, the robot generated a map of its surroundings through a technique called frontier exploration: it would create a map of the area within its sensor range, move to an unmapped region, map it, and continue this cycle until all the area within its traversable boundaries was accounted for. To resolve the errors that the robot accumulates when it uses dead reckoning to calculate its position, the team introduced a technique called continuous localization, in which the robot frequently generates another map, correlates it with the main map, and re-plots its position. Thus the mobile robot could robustly map and navigate an unfamiliar and changing laboratory environment.
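The core step of frontier exploration, finding mapped-free cells that border unknown space, can be sketched on a toy occupancy grid. The grid encoding below is invented for illustration and is not NRL's representation:

```python
# Hypothetical cell markers: free, unknown, and wall/obstacle.
FREE, UNKNOWN, WALL = ".", "?", "#"

def frontiers(grid):
    """Return the set of free cells adjacent to at least one unknown cell.

    These are the "frontier" cells the robot would drive to next in order
    to extend its map into unexplored territory.
    """
    found = set()
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    found.add((r, c))
                    break
    return found

grid = ["..?",
        ".#?",
        "..."]
```

Repeating this after each sensing-and-driving cycle, until no frontiers remain, is the exploration loop described above; a 3-D extension of the grid is exactly the map-structure question this project takes on.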

However, the outdoor environment differs greatly from the laboratory settings in which the robots were tested. There are differences in elevation, variations in ground composition, and the presence of non-ideal reflective surfaces. When outdoors, the robot must determine which plane it is traversing (as opposed to the 2-D calculations of the laboratory environment). It must be able to account for ramps, curbs, hills, and ditches – realizing that if it is on a negative incline, the “barrier” it senses is actually the ground before it rather than an obstacle. When traversing from smooth pavement to another ground material (e.g., sand), the robot should be able to record and account for the change when calculating its position and movement. In addition, the outdoor environment contains many objects that are poor reflectors of the robot’s sonar pulses, decreasing the effectiveness of its sensors.

The primary objective of this Trident project is to extend Dr. Schultz’s integration of exploration and localization for mobile robots so that it works robustly in an outdoor setting. The first task is to develop multiple map structures that the robot can use to represent a 3-D environment. Using those structures, we can test how well each one performs based on the amount of storage space needed, the accuracy of the map, and the speed at which map correlations can be accomplished (for localization purposes). By comparing these results, we can determine which structure (or combination of features from different structures) would best meet the needs of an outdoor robot. After deciding on the best way to represent the environment, we can then incorporate and test different localization schemes to determine which method would be most efficient for the robot to use.

The goal of this research is to develop a 3-D map structure and localization algorithm for a mobile robot that would let it explore and map an urban-like environment on its own. Such a robot could prove very useful by searching and gathering information in dangerous and hostile areas.

Assistant Professor Frederick L. Crabbe
Computer Science Department

Benjamin M. Heineike
Midshipman First Class
United States Navy

Modeling Morphogenesis with Reaction-Diffusion Equations Using Galerkin Spectral Methods

The theory is that when stem cells are deciding when to differentiate, they receive chemical signals from “morphogens.” When these morphogens occur in the right patterns, they give the signals that cause the cells to generate biological patterns. For instance, the dappling pattern on certain leaves or on the skin of leopards can be reproduced with reaction-diffusion equations. Perhaps the mechanism behind many biological patterns is less complicated than previously speculated.

These reaction-diffusion partial differential equations are what we are using to model the interactions of our morphogens. The equations take only two processes into account: (1) the chemical reactions of the chemicals with each other, and (2) the diffusion of the chemicals across the tissue. (We leave out electromagnetic forces, internal cell structure, etc.) We would like to show that complex patterns can be produced assuming just these two processes. This model neglects some of the conditions known to be present in cells, such as cell walls and electromagnetic forces, but we would like to show that these effects are negligible. This is a notion in mathematics referred to as complexity: simple interactions at a local level giving rise to very complex patterns and self-organization at a global level. As we progress, we will improve the model with realistic coefficients, realistic reaction functions f, realistic boundary conditions and, if we have time, perhaps another dimension.

The method that we will use to simulate these equations is a computational method known as the Galerkin (or spectral) method. The method allows us to approximate partial differential equations of certain types with an infinite set of ordinary differential equations. We will then decide at what level we must truncate this infinite set to get an accurate enough description of the behavior of the solution while keeping our computing time within reasonable limits.
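For the diffusion part of a reaction-diffusion equation on a periodic domain, the Galerkin reduction can be shown concretely: each Fourier mode obeys its own ODE, da_k/dt = -D k² a_k, and truncation keeps finitely many modes. The sketch below (Python with forward Euler, rather than the MATLAB ode45 used in the project, and with the reaction term omitted) recovers the known exponential decay of each retained mode:

```python
import math

def evolve_modes(a0, D, t, steps=10000):
    """Integrate the truncated Galerkin ODE set da_k/dt = -D k^2 a_k.

    a0: initial Fourier coefficients for modes k = 0, 1, ..., len(a0)-1
    on a 2*pi-periodic domain (an assumed normalization). Forward Euler
    stands in for a proper ODE solver.
    """
    a = list(a0)
    dt = t / steps
    for _ in range(steps):
        a = [ak + dt * (-D * k * k * ak) for k, ak in enumerate(a)]
    return a

# Each mode should decay like exp(-D * k^2 * t); mode 0 is conserved.
a = evolve_modes([1.0, 1.0, 1.0], D=1.0, t=0.5)
```

Adding the reaction terms couples the modes together through the nonlinearity, which is exactly where the truncation-level question becomes interesting.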

The primary ODE solver that we will use will be MATLAB’s ode45, but we will compare our results with those obtained in Mathematica and with various commercial off-the-shelf (COTS) programs that solve PDEs (e.g., FEMLAB). We will write our code in MATLAB, Maple, and Mathematica.

Some of the analyses that we will carry out will be:

- comparing our method to other methods in computing time, accuracy, and reproducibility;

- deciding how many ODEs are needed to make the solution accurate enough;

- identifying conditions that lead to patterns;

- determining whether or not these conditions would be feasible in real reactions;

- creating interesting patterns with mathematics.

Professor Reza Malek-Madani
Associate Professor Sonia M. Garcia
Mathematics Department

Peter D. Huffman
Midshipman First Class
United States Navy

Towards Better Optical Limiters

Optical limiters are devices that transmit light at low intensities, but block high intensity light, effectively keeping the amount of transmitted light below a certain level. Optical limiters have a wide variety of applications, and are of special interest to the Navy for protecting optical sensors and human eyes from directed laser weapons.

The most successful optical limiters known are various forms of metallated phthalocyanines, in particular lead and indium phthalocyanines. Yet further improvements are needed. The first part of this project involves the synthesis and examination of phthalocyanines metallated with thallium, whose proximity to lead and indium on the periodic table suggests that it will exhibit favorable optical limiting properties. As phthalocyanines are highly insoluble in many useful solvents, the synthesis will also include adding substituents to the phthalocyanine macrocycle in order to increase the solubility of the molecule for testing and application. The fact that unsubstituted thallium phthalocyanines have been made and reported in the literature suggests that the synthesis of substituted derivatives should be feasible.

The second approach to improving optical limiters is to control the aggregation of molecules in solution. Lead phthalocyanines show a maximum in their optical limiting properties as concentration increases, but these properties diminish at higher concentrations. Scientists believe that the most successful optical limiter is a solution of lead phthalocyanines at a concentration at which the molecules aggregate primarily as dimers. The second part of this project involves the synthesis of covalently linked phthalocyanine macrocycles, which essentially force the molecules to dimerize in solution. These "clamshell" structures have been reported in the literature, but not for optical limiting applications.

The two prongs of the project may be merged: if thallium phthalocyanines are found to be effective optical limiters, the "clamshell" structures may be metallated with thallium. Based upon the optical limiting properties found for the "clamshell" structures, a tetramer phthalocyanine molecule may be synthesized (covalently linking four phthalocyanine monomers) which may act as a pair of phthalocyanine dimers.

So far, both t-butyl and cumylphenoxy substituted clamshell phthalocyanines have been synthesized. A metallation of the cumylphenoxy phthalocyanine dimer with lead appears successful based upon NMR and UV-VIS spectroscopy analysis. The maximum absorbance for the lead metallated phthalocyanine dimer (and a cumyl-phenoxy substituted lead phthalocyanine monomer) occurs at a longer wavelength than for similar phthalocyanine dimers metallated with other metals (around 720 nm). Preparations are being made to metallate simple phthalocyanine monomers with thallium and then proceed to a dimer metallation.

Professor Jeffrey P. Fitzgerald
Chemistry Department

Pritha A. Mahadevan
Midshipman First Class
United States Navy

Biophysical Characterization of a Bifunctional Iron Regulating Enzyme

Proteins are the most prevalent class of biological macromolecules, and are present in every form of life. Enzymes, hormones, antibodies, and muscle proteins all illustrate the diversity of protein function. Enzymes, however, are the most versatile, catalyzing virtually all cellular reactions.

Moreover, recent findings reveal that enzymes are even more versatile than originally thought. Although classical biochemistry has taught us that every protein has one corresponding gene and only one specific function, enzymes have been discovered that can interchange forms and conduct two distinct functions, depending on the cellular conditions.

One such enzyme with two specific functions is the mammalian iron responsive element (IRE) binding protein IRP-1. When free iron levels within the cell are low, IRP-1 plays a regulatory role to increase acquisition of iron and stimulate release of stored iron. But when iron levels return to normal, an iron-sulfur cluster forms within the protein and IRP-1 assumes its other form of cytoplasmic aconitase, an enzyme involved in energy metabolism.

We are interested in learning how this enzyme converts between its two forms. To do this, we will investigate how it acquires its three-dimensional structure, or folds, under different experimental conditions. Before the folding properties can be investigated, however, it will be necessary to synthesize and purify the enzyme. The enzyme to be studied is the human form of the protein, but it will be produced in bacteria using recombinant DNA methods. The gene for the protein is encoded in a plasmid inserted into a non-virulent laboratory strain of E. coli. A plasmid is an extra piece of genetic material that allows the protein to be expressed in greater quantities than would occur naturally.

The bacterial cultures are grown in a broth, and then the cells are harvested by centrifugation. The cells collected are broken open chemically, and the aconitase protein is selectively removed from the mixture of cellular products using an affinity tag that has been placed at the beginning of the protein. The tag causes the protein to bind strongly to Ni ions that have been attached to a solid resin. The protein is additionally purified as necessary, and the purity of the protein is assessed by means of gel electrophoresis.

A series of experiments has been designed to characterize this bifunctional protein. Using a variety of spectroscopic and enzymatic methods, the thermodynamic and kinetic properties of this important and interesting protein will be investigated.

The equilibrium properties will be analyzed by denaturing the aconitase both chemically and thermally. Aconitase's iron-sulfur center absorbs maximally at 450 nm when the protein is fully functional, so as the protein is denatured under various experimental conditions, the change in absorbance at this wavelength can be followed. Analysis of this spectroscopic signal will make it possible to determine an equilibrium constant between the folded and unfolded forms under various conditions. These equilibrium constants can then be used to determine how stable the protein is by calculating its Gibbs free energy.
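The final step of that calculation follows the standard relation between an equilibrium constant and stability, ΔG = -RT ln K. The Python fragment below uses a hypothetical equilibrium constant for illustration, not a measured value from this project.

```python
import math

# Gibbs free energy of unfolding from an equilibrium constant: dG = -R*T*ln(K).
# K is taken as a hypothetical [unfolded]/[folded] ratio inferred from the
# 450 nm absorbance signal; the numerical value is illustrative only.
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K (25 degrees C)
K_unfold = 1e-3    # assumed equilibrium constant under native-favoring conditions

dG = -R * T * math.log(K_unfold)   # J/mol; positive => folded state is stable
print(f"dG = {dG / 1000:.1f} kJ/mol")
```

A positive ΔG of unfolding means the folded form is thermodynamically favored; repeating the measurement across denaturant concentrations or temperatures maps out the protein's stability.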

Another interesting area of study is the kinetics of the conversion of aconitase. Standard enzyme kinetic analysis techniques will be used to understand the factors that govern the interconversion between the two forms of the protein.

Recent biological research shows strong indications that nitric oxide and hydrogen peroxide will induce the removal of the iron-sulfur center. The nature of interactions between the iron-sulfur proteins and oxidants will be analyzed in order to better understand the role of the aconitase protein and investigate the possibility that the iron-sulfur centers serve in a capacity of stress regulation.

The importance of protein characterization research cannot be emphasized enough. Now that the Human Genome Project has revealed the sequences of all our genes, our challenge is to determine the structures, functions, and regulation of the proteins they encode. The future of medicine will lie in our ability to understand and correct genetic errors that result in improper protein production in the human body.

Assistant Professor Virginia F. Smith
Chemistry Department

Jonathan P. Nelson
Midshipman First Class
United States Navy

Active Control of Fan Noise in Ducts Using Magnetic Bearings

The primary objective of this project is to demonstrate global noise attenuation of the fundamental frequency of a fan in an air duct. This will be achieved through the use of magnetic bearings. The project’s secondary objective is to attenuate broadband frequencies created by the fan at all points in space.

As a fan turns within a duct or pipe (e.g., in industrial heating, ventilation, or air conditioning units), it creates pressure waves, and hence sound, at its fundamental frequency. The fan also creates harmonics of the fundamental frequency, and turbulent airflow through the fan adds further noise. Therefore, a broadband spectrum of noise exists, all due to a fan blowing air.

Usually, rotating shafts are supported by conventional ball bearings. However, this project will use active magnetic bearings consisting of two radial bearings and one thrust bearing. The radial bearings will support the fan shaft radially while the thrust bearing will control axial movement. While the bearings' main function is to support the fan shaft, they will also perform the critical function of active sound control.

Active sound control is the method of determining the frequency and amplitude of the sound created and then using another source to output a sound wave of equal amplitude and frequency, but 180 degrees out of phase, to achieve destructive interference. Microphones will be positioned at key places throughout the duct in order to observe the full extent of sound produced within the duct. Some of these microphones are connected to a Digital Signal Processor (DSP), a microprocessor that contains the active sound control program needed to vibrate the fan shaft in a pattern opposite to the original sound patterns. Other microphones will be used to monitor the sound created before and after the active sound control system is turned on.
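The destructive-interference principle can be sketched in a few lines of Python; the 120 Hz tone standing in for the fan's fundamental frequency is purely illustrative.

```python
import numpy as np

# Destructive-interference sketch: an anti-phase copy of a tone cancels it.
# 120 Hz is a stand-in for the fan's fundamental frequency, not a measured value.
fs = 8000                                   # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
noise = np.sin(2 * np.pi * 120 * t)         # primary source: fan fundamental
anti = np.sin(2 * np.pi * 120 * t + np.pi)  # secondary source, 180 degrees out of phase
residual = noise + anti                     # what a monitoring microphone would measure
print(np.max(np.abs(residual)))             # essentially zero: full cancellation
```

In the experiment the "secondary source" is not a separate speaker but the fan shaft itself, vibrated by the magnetic bearings under DSP control.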

Although active sound control systems traditionally use a secondary speaker to cancel sound from the primary noise source, this experiment will use the fan equipped with magnetic bearings as the secondary speaker. As sound is emitted from the fan, the active sound control system will attempt to dampen the fan noise. Because the primary and secondary sound sources are collocated, control of sound throughout the duct is theoretically possible. This project will attempt to demonstrate this global control of noise experimentally.

Associate Professor John M. Watkins
Associate Professor George E. Piper
Weapons & Systems Engineering Department

Noah F. Reddell
Midshipman First Class
United States Navy

Covert Communication Utilizing Discretely Generated Chaos

In the past, chaos was often overlooked and written off as random behavior due to noise. Exciting new insights during the latter half of the 20th century have led to huge leaps in understanding of the field. Mathematicians and engineers are even discovering ways to exploit certain properties of chaotic systems. One emerging example of useful chaos is the use of chaotic systems for communication.

Most of the work developing this idea has been done either on a purely theoretical basis or in component based electrical circuits that are not flexible or practical. The aim of my project will be to explore the advantages of communicating using a chaotic carrier, and to design and create such a system with the goal of covert military communication in mind.

The project will take a unique approach towards exploring the benefits of chaos. We will use digital signal processors for implementing chaotic systems. These high-speed processors will produce a chaotic carrier from software rather than in an electrical circuit. The use of digital signal processors will be more practical from an engineering standpoint and also very conducive to the research process.

Chaotic systems are a class of deterministic systems that are aperiodic and sensitively dependent on slight variations in initial conditions. This sensitive dependence on initial conditions means that the behavior of a system cannot be predicted for any significant time into the future: the state at the next instant is completely deterministic, but in the long run it cannot be calculated with any degree of accuracy. Such systems produce random-like behavior. I will be looking at both the frequency-domain and time-domain properties of chaotic systems and expect to find that using them as a message carrier will offer several advantages over traditional modulation schemes.

At first glance, it seems useless to attempt communication using a chaotic carrier, since the state of a chaotic system cannot be well predicted. Yet if it could be done, the natural properties of chaotic systems would be very beneficial for covert communication. Fortunately, a number of chaotic communication schemes are possible based on the property of synchronization.

Some chaotic systems can be synchronized with an identical system by allowing for some influence between the two. Both systems will remain chaotic, but one mirrors the other. Once synchronization is achieved, information can be sent. A transmitter’s output is modified by a message. Since the receiver follows what the transmitter’s state should be, it can detect the change caused by a message and thus extract the information from the chaotic signal. Meanwhile, the transmission continues to look like noise to an outside observer.
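As a sketch of this synchronization principle, the following Python fragment couples two Lorenz systems in the Pecora-Carroll drive-response configuration: the receiver's y and z equations are driven by the transmitter's x signal, and the receiver converges to the transmitter's trajectory despite a very different starting point. The parameter values are the classic chaotic ones; the initial states are illustrative, and the project's own implementation runs on DSP hardware rather than SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classic chaotic Lorenz parameters

def coupled(t, s):
    x, y, z, yr, zr = s
    # Transmitter: full Lorenz system.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # Receiver: y-z subsystem driven by the transmitted x signal.
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    return [dx, dy, dz, dyr, dzr]

s0 = [1.0, 1.0, 1.0, 5.0, -5.0]            # receiver starts far from transmitter
sol = solve_ivp(coupled, (0, 20), s0, max_step=0.01)
err = abs(sol.y[1, -1] - sol.y[3, -1])     # |y - y_r| after the run
print(err)                                  # small: the two systems have locked
```

Once the receiver has locked on, any modulation the transmitter adds on top of its chaotic output shows up as a synchronization error at the receiver, which is how the message is recovered.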

This project involves several investigations into the feasibility and performance of chaotic communication schemes. At first, all of my work will be done with a baseband transmitter and receiver; later, more study will be done by up-shifting the frequency of the system to an intermediate stage suitable for radio transmission.

I am using two Texas Instruments Digital Signal Processors (DSPs) to numerically emulate a chaotic transmitter and receiver. I have started out with the classic Lorenz system, but expect to try others as well. For the base-band system, a wire from one DSP board to the other carries the transmission. I will spend a good deal of time analyzing the frequency domain properties of this signal. This is how I will determine the effectiveness of camouflaging the transmission among background noise. Artificial noise will also be incrementally added to construct bit error rate (BER) curves.

Upon completion of this project, I will have completed investigations in at least two new areas. One is the utilization of digital signal processing methods in the transmitter and receiver instead of analog circuitry. The second is performance analysis of the system from a communications perspective.

Associate Professor Erik M. Bollt
Mathematics Department
CDR Thaddeus B. Welch, III, USN
Electrical Engineering Department

Jeremiah J. Wathen
Midshipman First Class
United States Navy

Optical Limiting Within Capillary Waveguides

Many applications, especially commercial fiber optic systems, demand practical optical limiters. These devices can protect optical sensors that cannot handle the intense light many optical systems carry. Optical limiters can be constructed from chemical compounds that display a nonlinear response to incident light: they allow nearly full transmission of low-intensity light but deny the transmission of high-intensity light. The more intense the incident light becomes, the less light a limiting compound will transmit. We hope to build fast-response limiters compatible with modern fiber optic systems.

We propose to construct optical limiters by filling tiny glass tubes, called capillaries, with nonlinear compounds. Through total internal reflection, these filled capillary tubes will act as optical waveguides. The glass capillary method offers important advantages. First, within such a capillary waveguide, the light/limiter interaction length can be extended indefinitely, increasing the limiting ability of nonlinear materials. Second, capillary waveguides will easily integrate into standard single-mode fiber optical applications. Third, these capillaries can have extremely small diameters. Thus, we can invoke a limiting response even against low intensity light by focusing the light into an intense beam inside a capillary waveguide.

The study will attempt to discover how untested limiting compounds respond to light. Nonlinear compounds limit due to a number of mechanisms. Molecular photon absorptions and refractive index changes both cause a nonlinear limiting response. While many different types of photon absorptions can occur, this study will focus specifically on two-photon absorption (TPA) and reverse saturable absorption (RSA). These two nonlinear mechanisms limit very quickly, providing limiting against very short-pulsed laser sources. Negative refractive index change, caused by increased kinetic activity of the limiting compound, is a slower acting phenomenon that can limit longer laser pulses.
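The intensity dependence that makes TPA useful for limiting can be sketched from its propagation equation, dI/dz = -beta * I^2, whose solution gives a transmittance T = 1/(1 + beta * I0 * L) that falls as the input intensity rises. The coefficient and interaction length below are illustrative assumptions, not measured properties of any compound in this study.

```python
# Two-photon absorption (TPA) sketch: dI/dz = -beta * I**2 along the waveguide
# integrates to I(L) = I0 / (1 + beta * I0 * L), so transmittance drops with
# input intensity. Values are assumed for illustration only.
beta = 1e-2       # TPA coefficient, cm/GW (assumed)
L_cm = 10.0       # interaction length; a capillary makes this effectively large

def transmittance(I0):
    """Fraction of light transmitted, I(L)/I0, for input intensity I0 (GW/cm^2)."""
    return 1.0 / (1.0 + beta * I0 * L_cm)

for I0 in (0.1, 10.0, 1000.0):
    print(I0, transmittance(I0))
```

The same expression shows why the capillary geometry helps: lengthening L or tightening the focus (raising I0) both deepen the limiting response without changing the compound.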

Experiments will investigate two separate classes of nonlinear compounds. First, we will characterize the response of numerous lead and thallium phthalocyanines engineered by chemists at the Naval Research Laboratory (NRL) and at the U.S. Naval Academy. These molecules have previously demonstrated excellent TPA and RSA responses. Other tests will characterize the responses of various gold cluster/alkanethiol solutions. Such solutions should demonstrate limiting due to negative refractive index change.

Experiments will test these molecules' responses against laser light over a broad range of wavelengths, intensities, and pulse widths. The tests will take specific interest in the limiting response observed at near-infrared wavelengths, as most fiber optic systems operate in this range.

We intend to accomplish a number of objectives. First, we will engineer a single-mode capillary housing to incorporate into fiber optic systems. Once we achieve such a design, experiments will characterize the responses of a number of the compounds within these single-mode housings. We will measure the raw limiting effect of the nonlinear materials and attempt to determine by which mechanism each different limiter acts. By mixing the different limiters we study, we hope to engineer a viable limiter that works across a broad range of wavelengths and pulse widths. Finally, if time allows, we may investigate limiting responses from compounds housed within an array of coupled waveguides.

Assistant Professor James J. Butler
Physics Department