Details

Title:The USNA Computer Science Department Technical Report Series Information and Guide
Authors:Crabbe, Frederick L.
Serial Number:2005-01
Publication Date:2-2-2005
Abstract:In January of 2005, the Computer Science Department at the United States Naval Academy launched its own technical report series. The management of submissions and queries is performed automatically with Python scripts. This document describes how the system works and how it should be used to submit technical reports and retrieve reports already submitted.
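The metadata format used throughout this listing lends itself to simple scripted handling. As a hedged illustration (the field names and parsing rules here are assumptions based on the entries shown in this listing, not the actual USNA submission scripts), a Python sketch of parsing one entry's "Field:value" lines:

```python
# Hypothetical sketch: parse "Field:value" metadata lines like those in this
# listing into a dict. Field names and format are assumptions, not the real
# USNA scripts' behavior.

def parse_report(lines):
    """Parse a block of 'Field:value' lines into a metadata dict."""
    fields = {}
    for line in lines:
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

record = parse_report([
    "Title:The USNA Computer Science Department Technical Report Series",
    "Serial Number:2005-01",
    "Publication Date:2-2-2005",
])
```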

Title:The McCallum Projection, Lifting, and Order-Invariance
Authors:Brown, Christopher W.
Serial Number:2005-02
Publication Date:4-22-2005
Abstract:The McCallum Projection for Cylindrical Algebraic Decomposition (CAD) produces a smaller projection factor set than previous projections; however, it does not always produce a sign-invariant CAD for the set of input polynomials. Problems may arise when a (k+1)-level projection factor vanishes identically over a k-level cell. According to McCallum's paper, when this happens (and k+1 is not the highest level in the CAD) we do not know whether the projection is valid, i.e. whether or not a sign-invariant CAD for the set of input polynomials will be produced when lifting is performed in the usual way. When the k-level cell in question has dimension 0, McCallum suggests a modification of the lifting method that will ensure the validity of his projection, although to my knowledge this has never been implemented. In this paper we give easily computable criteria that often allow us to conclude that McCallum's projection is valid even though a projection factor vanishes identically over a cell. We also improve on McCallum's modified lifting method. We've incorporated the ideas contained in this paper into QEPCAD, the most complete implementation of CAD. When McCallum's projection is invalid because a projection factor is not order-invariant over a region on which it vanishes identically, at least a warning message ought to be issued. Currently, QEPCAD may print warning messages that are not needed, and may fail to print warning messages when they are needed. Our implementation in QEPCAD ensures that warning messages are printed when needed, and reduces the number of times warning messages are printed when not needed. Neither McCallum's modified lifting method nor our improvement of it has been implemented in QEPCAD --- the design of the system would make implementing such a feature quite difficult.

Title:Robot Imitation Learning of High-Level Planning Information
Authors:Crabbe, Frederick L., Hwa, Rebecca
Serial Number:2005-03
Publication Date:5-2-2005
Abstract:We present a system that enables a robot to learn to plan through demonstration and imitation. An imitator acquires planning operators by observing a demonstrator, segmenting the demonstrator's actions into planning steps, and learning the preconditions and effects of the operators. When the imitator tries to execute its own plans, it learns to perform the operations through reinforcement learning, and corrects errors in the previously learned operator effects. Note: This document was originally written in August of 2003, but submitted as a tech report on the above date.

Title:Low Level Segmentation for Imitation Learning Using the Expectation Maximization Algorithm
Authors:Warner, Andrew D.
Serial Number:2005-04
Publication Date:5-3-2005
Abstract:Imagine a robot that is able to develop skills on its own, without being programmed directly. This robot would be invaluable in any business, factory, or laboratory. Unfortunately, this problem, known as inductive learning, is very difficult, and has several varieties. One such variety is imitation learning. The overall process of imitation learning begins with one robot observing another robot performing a task. The watcher then breaks down, or segments, the demonstrating robot's actions into basic actions called planning units. Next the observing robot uses the planning units to create a plan that accomplishes the required task. The execution of a successful plan demonstrates that the robot has correctly implemented an inductive learning process. The scope of this research does not allow the problem of imitation learning to be discussed in its entirety; however, it does investigate an important subset of the larger problem. This paper focuses on the segmentation of the data, specifically how to break it up into the steps that provide the building blocks of the robot's ultimate plan.
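As an illustration of the segmentation machinery named in the title, here is a minimal sketch of the Expectation Maximization algorithm fitting a two-component one-dimensional Gaussian mixture to synthetic data. The data and model are invented for this example and are not from the report:

```python
import math
import random

# Toy sketch (not the paper's implementation): EM for a two-component 1-D
# Gaussian mixture, the kind of model that can segment a stream of readings
# into two underlying "basic actions". All data below is synthetic.

def em_two_gaussians(xs, iters=50):
    mu = [min(xs), max(xs)]          # initialize means at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]                  # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
            pi[k] = nk / len(xs)
    return mu

random.seed(0)
xs = [random.gauss(0.0, 0.5) for _ in range(100)] + \
     [random.gauss(5.0, 0.5) for _ in range(100)]
mu = em_two_gaussians(xs)   # means should land near 0 and 5
```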

Title:An Algorithm for Improving System Safety via Software Fault Trees
Authors:Jones, Sean A., Needham, Donald M. (Advisor)
Serial Number:2005-05
Publication Date:5-26-2005
Abstract:Analysis of software fault trees exposes hardware and software failure events that can lead to unsafe system states, and provides insight on improving safety throughout each phase of a system's development. Although fault trees can be pruned for low severity and low probability nodes, few techniques exist for systematically improving system safety by focusing on cost analysis of a system's fault tree nodes. This paper presents an algorithm for system failure mitigation, supportive of continuous software evolution, based on the reduction of a fault tree into a polynomial expression of degree g, where g is the number of inputs. We combine cost functions that model the expense of improving component reliability into a vector field which provides a measurement of the degree of difficulty of system improvement. The gradient of the vector field is evaluated for vectors providing steep ascent towards the area of greatest safety improvement, which in turn provides guidance on improving design time system safety. We provide an example application of our improvement algorithm, and examine improvement verification of the resulting system modifications.
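The gradient-guided search the abstract describes can be illustrated with a toy: a hypothetical (invented) safety-gain surface over two component costs, climbed by numerical gradient ascent. This is a sketch of the general technique, not the paper's algorithm:

```python
# Toy illustration (assumptions throughout): model the safety gain from
# improving two components as a smooth surface, and follow the numerical
# gradient toward the steepest improvement, echoing the gradient-guided
# search described in the abstract.

def safety_gain(x, y):
    # Hypothetical concave surface: diminishing returns on each component.
    return -(x - 2.0) ** 2 - (y - 1.0) ** 2

def grad(f, x, y, h=1e-5):
    # Central-difference numerical gradient.
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x, y = 0.0, 0.0
for _ in range(200):
    gx, gy = grad(safety_gain, x, y)
    x, y = x + 0.1 * gx, y + 0.1 * gy   # ascend toward greatest gain
# (x, y) converges to the optimum (2, 1) of the toy surface.
```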

Title:Establishing Correspondence among Shared Information and Tasks
Authors:Childers, Candace M.
Serial Number:2005-06
Publication Date:6-7-2005
Abstract:Creating interoperability among heterogeneous systems enhances our military's war-fighting capabilities. Differences in hardware, languages, and data models make interoperability hard to achieve. The Object-Oriented Method for Interoperability (OOMI) resolves modeling differences among systems through construction of a Federation Interoperability Object Model (FIOM) used to capture information and tasks shared among systems. The FIOM is constructed in either a bottom-up or top-down fashion using the OOMI Integrated Development Environment (OOMI IDE) and includes both component system and standard representations of the shared tasks and information. When constructing a federation of interoperable systems, a correspondence must first be established among shared tasks and information before data modeling differences can be resolved. The OOMI IDE uses both semantic and syntactic correlation methodologies for establishing such correspondences. Syntactic correlation is performed using neural networks. Syntactic data concerning the structure and signature of shared information and tasks is used to create discriminator vectors for objects being compared. Neural networks are used to compare these discriminator vectors to determine the degree of similarity among objects. A ranking of the scores returned from the neural network comparison is used to assist an interoperability engineer in identifying corresponding objects for which modeling differences can be resolved.
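The ranking step can be sketched with a simplification: where the OOMI IDE uses neural networks to compare discriminator vectors, the toy below substitutes plain cosine similarity to show how candidate correspondences are scored and ranked. The vectors and class names are invented for illustration:

```python
import math

# Simplified sketch: the OOMI IDE compares discriminator vectors with neural
# networks; here plain cosine similarity stands in for that comparison so the
# ranking step is visible. Vectors and class names are invented.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [3, 0, 1, 2]            # discriminator vector of one system's object
candidates = {                  # candidate objects from another system
    "Track": [3, 1, 1, 2],
    "Waypoint": [0, 5, 0, 1],
    "Sensor": [1, 0, 4, 0],
}
# Rank candidates by similarity; the top entry is the best correspondence.
ranking = sorted(candidates, key=lambda n: cosine(query, candidates[n]),
                 reverse=True)
```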

Title:On Quantifier Elimination by Virtual Term Substitution
Authors:Brown, Christopher W.
Serial Number:2005-07
Publication Date:8-24-2005
Abstract:This paper presents a new look at Weispfenning's method of quantifier elimination by virtual term substitution and provides two important improvements. Virtual term substitution eliminates a quantified variable by substituting formulas in the remaining variables for each atomic formula in which the quantified variable appears. This paper investigates the polynomials that arise in the substitution formulas Weispfenning proposed and, based on this examination, provides a simpler substitution for the general case, and alternate substitutions for several commonly occurring situations. Providing alternate substitutions allows virtual term substitution to make choices that produce simpler output.

Title:Midshipmen Blue Force Tracking
Authors:Evans, Paul K., Stahl, David J.
Serial Number:2005-08
Publication Date:12-13-2005
Abstract:This project explores the feasibility of networking Windows CE based handheld devices using inexpensive off-the-shelf hardware and software systems to provide Midshipmen with a tactical training system simulating the FBCB2 system - Force XXI Battle Command, Brigade and Below ("Blue Force Tracking"). In conjunction with the YP Tactical Data Simulator, "Midshipman Blue Force Tracking" is intended to be used as a pedagogical tool for educating midshipmen in the concepts of Network Centric Warfare and operations.

Title:Software Fault Tree Key Node Metric Test Cases
Authors:Needham, D M, Jones, S A
Serial Number:2006-01
Publication Date:4-25-2006
Abstract:This report contains 70 sets of software fault trees used to test a software fault tree key node safety metric.

Title:IPv6 Testing
Authors:Landis, Christopher B
Serial Number:2006-02
Publication Date:5-8-2006
Abstract:The DoD is committed to transitioning to Internet Protocol version 6 (IPv6) by 2008. Transitional working groups are under way to develop plans to implement IPv6 on Government networks. The DoD is also forming working relationships to conduct testing and share information.

Title:High Energy Laser Progressive Wavefront Modeling
Authors:Izbicki, Michael J., Needham, Donald M. (Advisor)
Serial Number:2006-03
Publication Date:12-8-2006
Abstract:High energy lasers have the potential to revolutionize naval warfighting by providing a weapons platform that has greater precision and speed than anything currently available. These lasers can be mounted on ships for surface warfare or mounted on satellites for strikes anywhere around the world. Crucial to the development of these lasers is an understanding of how different atmospheric conditions affect the laser’s propagation and the shape of the beam when it finally illuminates the target. Dr. Bill Colson from the Naval Postgraduate School Physics Department developed a computer model for simulating these beams; however, his program can only output two dimensional slices of the three dimensional laser. Theoretically, the beams should be forming “noodles” of energy that break off from the main beam, but that can be difficult to see from Colson’s original output. This project aims to modify Colson’s program so that it can create three dimensional models of the laser beams, and show the progression of the beams over time.

Title:Algorithmic Reformulation of Polynomial Problems
Authors:Brown, Christopher W.
Serial Number:2007-01
Publication Date:6-13-2007
Abstract:This paper considers the problem of existential quantifier elimination for real algebra (QE). It introduces an algorithmic framework for exploring reformulations of QE problems, with the goal of finding reformulations that make difficult problems tractable for QE implementations, or for which these implementations find simpler solutions. The program qfr is introduced, which implements this approach, and its performance on some example problems is reported.

Title:Mobile Vehicle Teleoperated Over Wireless IP
Authors:Akin, Micah, Crabbe, Frederick L.
Serial Number:2007-02
Publication Date:6-13-2007
Abstract:This document describes an effort to create a low-cost teleoperated vehicle that is controlled via 802.11g WiFi and the Internet. It is unusual in that it can be controlled at a great distance yet is still inexpensive. The appendices provide instructions on how to construct and operate the vehicle, along with the necessary source code.

Title:Internet Protocol Version 6 (IPv6): Capabilities, Operational Concepts, and the Transition from IPv4
Authors:Domagalski, Joshua E.
Serial Number:2008-01
Publication Date:8-27-2008
Abstract:Internet Protocol Version 6 (IPv6) has been proposed as a replacement to the current networking protocol, IPv4, used in the global Internet. IPv6 represents an entirely new protocol, incorporating improved routing capability, enhanced ability to support real-time audio and video traffic, better security and privacy, as well as a much larger address space. Technical challenges and operational requirements have hindered the Department of Defense from investigating and testing IPv6 on a large scale. To help facilitate the conversion to IPv6, we have built, tested, operated and maintained a pilot IPv6 network between the United States Naval Academy in Annapolis, Maryland and the United States Military Academy in West Point, New York. This network hosted multiple operating systems and employed a Domain Name System server, a web server, a File Transfer Protocol server, a Dynamic Host Configuration Protocol server, in addition to multiple PC and Unix-based clients. This network was then used to provide validation and refutation of operational concepts developed for the transition from IPv4 to IPv6. This research included investigations of network management, address allocation, Domain Name Services, protocol capabilities and communication, as well as transition techniques for migrating from the IPv4 protocol to IPv6. We present the results of our experience in deploying and testing an IPv6 network, discuss proposed best practices for the utilization of IPv6, and provide a list of lessons learned to include various incompatibility issues that were noted with many popular software applications. Finally, in exploring the added quality of service capability provided in IPv6, we present our preliminary results in testing and analyzing Voice over Internet Protocol on our IPv6 network.
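The address-allocation side of such a transition can be sketched with Python's standard ipaddress module; the prefix used here is the IPv6 documentation range (RFC 3849), not an actual USNA or USMA allocation:

```python
import ipaddress

# Sketch of the address-allocation arithmetic an IPv6 transition involves,
# using Python's stdlib ipaddress module. The prefix is the documentation
# range (RFC 3849), not a real allocation from the pilot network.

net = ipaddress.ip_network("2001:db8::/32")
subnets = list(net.subnets(new_prefix=48))   # carve /48 site prefixes
host = ipaddress.ip_address("2001:db8::1")

# A /32 yields 2**16 = 65536 /48 prefixes; the host above falls in the first.
```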

Title:Internet Protocol Security (IPSEC): Testing and Implications on IPv4 and IPv6 Networks
Authors:Domagalski, Joshua E.
Serial Number:2008-02
Publication Date:8-27-2008
Abstract:The goals we established at the outset of the research study were threefold: 1) to test and analyze the implementation of IPSec over an IPv4 network, 2) to test and analyze the implementation of IPSec over an IPv6 network, and 3) to test and analyze the implementation of IPSec over an IPv4/IPv6 network. These were to be tested using common, current operating systems, including Windows XP SP2 and SUSE 10.3 Linux. Of particular interest, we noted that IPSec poses many problems with Network Address Translation and in handling the incompatibility between IPv4 and IPv6 packets (due to differences in the two protocols' packet headers). In addition, we found that it is vital to understand the theory, implementation, and application of IPSec for the purposes of authentication and encryption. By the conclusion of the study, we were able to create a fully functional IPv4 network that implemented IPSec for file transfers and ICMP packets. Although we attempted to implement IPSec on an IPv6 network, we found that neither Windows XP Service Pack 2 nor Novell's SUSE 10.3 fully supports IPSec on IPv6 networks. In addition, many of the limitations that hinder the implementation of IPSec are not caused by faults in the protocol itself, but rather by problems with the applications utilizing the protocol.

Title:Lessons Learned in Transitioning from Internet Protocol Version 4 (IPv4) to Internet Protocol Version 6
Authors:Domagalski, Joshua E.
Serial Number:2008-03
Publication Date:8-27-2008
Abstract:Published in Proceedings of the National Conference on Undergraduate Research, April 10-12, 2008, Salisbury, MD. Internet Protocol Version 6 (IPv6) has been proposed as a replacement to the current networking protocol, IPv4, used in the global Internet. IPv6 represents an entirely new protocol, incorporating improved routing capability, enhanced ability to support real-time audio and video traffic, better security and privacy, as well as a much larger address space. Technical challenges and operational requirements have hindered the Department of Defense from investigating and testing IPv6 on a large scale. To help facilitate the conversion to IPv6, we have built, tested, operated and maintained a pilot IPv6 network between the United States Naval Academy in Annapolis, Maryland and the United States Military Academy in West Point, New York. This network hosted multiple operating systems and employed a Domain Name System server, a web server, a File Transfer Protocol server, a Dynamic Host Configuration Protocol server, in addition to multiple PC and Unix-based clients. This network was then used to provide validation and refutation of operational concepts developed for the transition from IPv4 to IPv6. This research included investigations of network management, address allocation, Domain Name Services, protocol capabilities and communication, as well as transition techniques for migrating from the IPv4 protocol to IPv6. We present the results of our experience in deploying and testing an IPv6 network, discuss proposed best practices for the utilization of IPv6, and provide a list of lessons learned to include various incompatibility issues that were noted with many popular software applications. Finally, in exploring the added quality of service capability provided in IPv6, we present our preliminary results in testing and analyzing Voice over Internet Protocol on our IPv6 network.

Title:Mobile Networks in IPv6
Authors:Noronha, Sean
Serial Number:2009-01
Publication Date:1-3-2009
Abstract:This research study, Mobile Networks in IPv6 with VOIP, was conducted at the United States Naval Academy (USNA) with the end goal of testing the capabilities of IPv6 in low-bandwidth networks. Specifically, this study focused on the effects of using IPv6 on networks whose speeds are comparable to 56K dial-up access, ISDN, and various other low-bandwidth wireless networks. The first segment of this research project examined the feasibility of sending large amounts of data from high- to small-bandwidth networks. A significant portion of this project aimed to correlate MTU settings and transmission speeds. One hypothesis was that increasing or decreasing the MTU size would affect those speeds, but the data collected showed that changing the MTU size affected only error rates, not transmission speeds. Tests were conducted to determine how line conditions would affect the way IPv6 would be transmitted across networks of varying bandwidth, but extensive tests could not be conducted because legacy hardware could not be acquired. To help mitigate the hardware gap, a three-pronged approach of theory, simulation, and experiment was used to validate results. The second segment of this research project focused on the viability of incorporating separate IPv6 networks tunneled through existing IPv4 connections. Our recommendation to use a tunneled network derives from the impracticality of completely transitioning to IPv6 without a “stepping stone” in case there are design flaws or a lack of personnel to support such a colossal move. The research shows that this is a more palatable concept for future development, as only basic knowledge of implementing a tunnel and appropriate hardware compatibility are required.
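The MTU finding can be illustrated with a back-of-the-envelope model (invented here, not taken from the report): at a fixed bit error rate, a larger MTU reduces header overhead per byte but raises the chance that any given packet is corrupted:

```python
# Toy model (assumptions throughout, not from the report): with a fixed bit
# error rate, a larger MTU means less header overhead per byte but a higher
# probability that any given packet contains an error. Numbers are
# illustrative only.

HEADER = 40          # IPv6 header, bytes
BER = 1e-6           # assumed bit error rate of a noisy low-bandwidth link

def goodput_fraction(mtu_bytes):
    payload = mtu_bytes - HEADER
    p_ok = (1 - BER) ** (mtu_bytes * 8)      # packet survives intact
    return (payload / mtu_bytes) * p_ok      # useful bytes actually delivered

small = goodput_fraction(576)    # minimum-ish MTU: more overhead, fewer errors
large = goodput_fraction(1500)   # Ethernet MTU: less overhead, more errors
```

At this assumed BER the overhead savings of the larger frame still outweigh its higher per-packet error probability; at much higher error rates the trade-off reverses.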

Title:Resource Constrained Network Design and Implementation
Authors:Gawne, Michael B.
Serial Number:2009-02
Publication Date:1-3-2009
Abstract:The research study, Resource Constrained Network Design and Implementation, was conducted at the United States Naval Academy (USNA) in order to simulate creating a secure and functional network with constrained resources. This is a problem faced by any administrator trying to maximize the security and functionality of a private network while minimizing the costs involved in the creation and maintenance of a local area network (LAN). The goals of this research study were established in a step-wise fashion, each step relying upon the completion of the previous goal. The goals were: 1) Configure the most basic LAN, one comprised of a switch and two nodes, and establish communication between the two nodes, 2) Introduce a 3600 series router into the network dividing the two LANs to simulate an internal network LAN and the external Internet, and permit communications between the two networks, 3) Create Virtual LANs (VLANs) and implement their use on the network, 4) Utilize Access Control Lists with rulesets based upon the VLANs, 5) Introduce a PIX 515E firewall into the network to isolate and protect the internal LAN from the simulated Internet. At the conclusion of the study it was determined that education and knowledge levels must be included when considering all the resources available to the network administrator. By far, the most difficult segment of this research project was the last: introducing the PIX 515E firewall into the network and establishing communications through the firewall. Internet resources are plentiful in the form of forums, web blogs, and other information technology support sites for novice network administrators to educate themselves on the basics of configuring their network, but online education for establishing a firewall is not as plentiful. Furthermore, even with the assistance of two Cisco Certified Network Associates (CCNA), a mid-level certification, and a Cisco Certified Internetwork Expert (CCIE), it is sometimes difficult to find the one error that drops all traffic and renders your network isolated from the rest of the world. One of the most beneficial ways to spend precious resources may very well be a seminar or other educational venue where the network administrator may first hone his/her skills on firewall configuration before attempting to configure the firewall without formalized training. This research study indicates that a secure, effective network can be constructed in a resource-constrained environment, especially when resources are properly allocated.

Title:Algorithm MinWtBasis for simplifying conjunctions of monomial inequalities
Authors:Brown, Christopher W.
Serial Number:2010-01
Publication Date:1-28-2010
Abstract:This paper defines a new algorithm, "MinWtBasis", which simplifies conjunctions of monomial inequalities. MinWtBasis runs in polynomial time, and the simplified equivalent formula it produces minimizes the sum, over all inequalities in the conjunction, of the number of non-strict variables appearing. For conjunctions consisting entirely of non-strict inequalities, this shows that the problem of finding a simplest equivalent formula is in P. This contrasts with the general case and the strict-inequality case, in which finding the simplest equivalent formula is NP-Hard.

Title:An Analysis of Root-Kit Technologies and Strategies
Authors:Monroe, Justin
Serial Number:2010-02
Publication Date:3-9-2010
Abstract:The research study, An Analysis of Root-Kit Technologies and Strategies, was conducted at the United States Naval Academy in an effort to help define a root-kit in terms understandable by someone with a background in computing, but not necessarily with the details of how an operating system runs. Specific topics cover basic back doors into a target system, covert channels, data exfiltration, and hiding software applications in the best way possible for the level of access attained. Because root-kits are becoming more commonplace on the Internet, the Department of Defense must be able to convey the importance of Information Assurance when applications such as root-kits can be installed in any number of ways. Once a root-kit is on the machine, it becomes increasingly hard to trust any information on the machine, and should the root-kit exfiltrate any information, it may be hard to figure out what information was stolen and how to mitigate the risks involved. The goals of the research paper were to define root-kit strategies in easy-to-understand phases, ranging from commonly found network tools and source code to implementation strategies of today's modern root-kits and root-kit prevention and mitigation systems. The source code contained in the paper references quick implementations of keyloggers and DLL injectors, two common applications found in a root-kit toolset to hide in the system and then log the user's habits. At the conclusion, several root-kit papers were analyzed and cataloged as they pertained to the different phases that were set up initially. Each and every tool utilized in the research study is freely available and has other, less malicious purposes. However, the research topics discussed in Phase 6, Advanced Root-Kit Implementations, are current research into how to prevent root-kit installation and to minimize the effectiveness of a root-kit. The most interesting part is that several of the projects utilize hooking and patching, two common root-kit practices, to subvert the operating system in order to prevent root-kits from executing.

Title:Virtualization Shares: Feasibility and Implementation in the USNA Computer Science Department
Authors:Wheeler, Christopher
Serial Number:2010-03
Publication Date:3-10-2010
Abstract:The research study, Virtualization Shares: Feasibility and Implementation in the USNA Computer Science Department, was conducted at the United States Naval Academy in an effort to determine how virtual machines, which had previously been transferred via external hard drive from host to host and run under VMware Workstation, could instead be run on a single powerful server with users interacting through thin clients. Specific topics cover basic virtualization concepts, differences in architecture between Xen and VMware, and the performance seen on a test network utilizing one server running ESX. As corporations and other large enterprises, including the Department of Defense, move from the traditional physical server infrastructure towards virtual consolidation, study in this area becomes more and more pertinent. In the USNA Computer Science Department, this server resides on a sandboxed network used only for testing purposes, but this technique has been implemented across many major organizations as a result of low utilization of traditional physical infrastructure. Using a virtualized architecture allows more dynamic load sharing based on the current demands placed on a particular host, and overall results in less idle time on the infrastructure. The goal of this research paper was to define potential architectures that satisfy our existing needs, including labs for Information Assurance classes, exercises such as the Cyber Defense Exercise, and development work. By analyzing their relative performance, a compromise between performance, ease of use, and the resources of the Department provided recommendations that will become an integral part of Computer Science and Information Technology education. At the conclusion, numerous studies on both VMware and Xen architecture were analyzed, which gave insight into architectures to be modeled by the Department. For the purposes of research, Xen was focused on more heavily by nature of being open source. However, our current VMware license weds us to their infrastructure, the main reason for solely analyzing ESX. This study may also lead to further research into topic areas such as dynamic image swapping across multiple servers, vulnerabilities of virtualization shares, and even more useful architectures for the Department's needs.

Title:Evaluating MLNs for Collective Classification
Authors:Crane, Robert J.
Serial Number:2010-04
Publication Date:12-13-2010
Abstract:Collective classification is the process of labeling instances in a graph using both instance attribute information and information about relations between instances. While several collective classification algorithms have been well studied, the use of Markov Logic Networks (MLNs) remains largely untested. MLNs pair first-order logic statements with numerical weights. With properly assigned weights, these rules may be used to infer class labels from evidence stated as logic statements. Our study evaluated MLNs against other collective classification algorithms on both synthetic data and real data from the CiteSeer dataset. As a whole, we encountered inconsistent and often poor performance with MLNs, especially on synthetic data where other collective classification algorithms performed well.
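For contrast with MLNs, the baseline family the study compares against can be sketched as a tiny iterative classification (ICA-style) loop that labels unknown nodes by majority vote over labeled neighbors; the graph, labels, and voting rule below are invented toy data, not the study's setup:

```python
# Tiny sketch of collective classification via an iterative classification
# (ICA-style) loop, one of the baseline families evaluated against MLNs.
# Graph, labels, and the majority-vote "classifier" are invented toy data.

edges = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4], 6: [4]}
labels = {1: "A", 2: "A", 5: "B", 6: "B"}    # known labels; 3 and 4 unknown

for _ in range(5):                            # iterate until labels stabilize
    for node in (3, 4):
        votes = [labels[n] for n in edges[node] if n in labels]
        if votes:
            # Majority vote over currently labeled neighbors.
            labels[node] = max(set(votes), key=votes.count)

# Node 3 (tied to the "A" cluster) and node 4 (tied to the "B" cluster)
# inherit the labels of their neighborhoods.
```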

Title:Clustering Systems with Kolmogorov Complexity and MapReduce
Authors:Troisi, Louis R
Serial Number:2011-01
Publication Date:6-2-2011
Abstract:In the field of value management, an important problem is quantifying the processes and capabilities of an organization's network and the machines within it. When the organization is large, ever-changing, and responding to new demands, it is difficult to know at any given time what exactly is being run on the machines. One could thus lose track of approved or, worse, unapproved or even malicious software as the machines become employed for various tasks. Moreover, the level of utilization of the machines may affect the maintenance and upkeep of the network. Our goal is to develop a tool that can cluster the machines on a network, in a meaningful way, using different attributes or features, and that does so autonomously in an efficient and scalable system. The solution developed implements, at its core, a streaming algorithm that in real time takes meaningful operating data from a network, compresses it, and sends it to a MapReduce clustering algorithm. The clustering algorithm uses a normalized compression distance to measure the similarity of two machines. The goal for this project was to implement the solution and measure the overall effectiveness of the clusters. The implementation was successful in creating a software tool that can compress the data, determine the normalized compression distance, and cluster the machines. More work, however, needs to be done in using our system to extract more quantitative meaning from the clusters generated.
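The normalized compression distance at the core of the clustering algorithm can be sketched with the standard zlib module; the "machine profiles" below are invented for illustration:

```python
import zlib

# Sketch of the normalized compression distance (NCD) used to compare
# machines: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C is compressed length. The "machine profiles" are invented.

def clen(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

web1 = b"httpd sshd cron ntpd httpd logrotate " * 20
web2 = b"httpd sshd cron ntpd nginx logrotate " * 20
db   = b"postgres sshd backup-agent vacuumdb " * 20

# Similar machines compress well against each other, so their NCD is lower;
# a clustering algorithm groups machines with small pairwise NCD.
similar = ncd(web1, web2)
different = ncd(web1, db)
```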

Title:Initial Report of the Dean's Cyber Warfare Ad Hoc Committee
Authors:Needham, Donald M, Vincent, Patrick J
Serial Number:2011-02
Publication Date:12-22-2011
Abstract:The Department of Defense has substantially increased the emphasis and investment being made to better position the military to operate effectively in a cyber warfare environment. Among the more notable recent developments have been the establishment of a new Cyber Warfare Command (USCYBERCOM) at Fort Meade, the establishment of a new Navy Fleet Cyber Command, as well as the reorganization of the OPNAV staff, merging N2 and N6 into a new information-warfare-centric organization. In reaching its findings, the Cyber Warfare ad hoc Committee analyzed how our sister service academies and civilian undergraduate institutions incorporate cyber warfare concepts into their curricula. The Committee also examined what education and skills various graduate institutions consider necessary preparation for entry into cyber warfare related curricula at their institutions. The OPNAV and CMC staffs were asked for their perspectives on the education and training that our graduates should receive in order to help the Navy and Marine Corps in this area. This report presents the Committee's results.