Operations Research and Statistics Seminar
Spring 2021

May 03

TBA
Stephen Wright, University of Wisconsin
Location: meet.google.com/junosjvady
Time: 03:45 PM

Apr 26

MCM Competition
MIDN Andi Alonsocintron, MIDN Ahmed Baameur, and MIDN Michael White, United States Naval Academy
Location: meet.google.com/junosjvady
Time: 03:45 PM
MIDN Alonsocintron, MIDN Baameur, and MIDN White will present the model that they developed for the Mathematical Contest in Modeling.

Apr 19

TBA
Florian Potra, UMBC
Location: meet.google.com/junosjvady
Time: 03:45 PM

Apr 05

Finding Small Cuts Using Distance Functions
Dr. R. Ravi, Carnegie Mellon University
Location: meet.google.com/junosjvady
Time: 03:45 PM
The natural linear program for minimum s-t cut in a directed graph, using distance variables on the arcs, can be shown to be integral by choosing a random level cut defined by all nodes within a random cutting radius between 0 and 1 from the source. I will explain how extensions of this idea lead to simple 2-approximations for multiway cuts in undirected (folklore) and directed graphs (Chekuri & Madan, SODA 2016), a 2 ln k approximation for multicuts involving k terminals in undirected graphs (Calinescu, Karloff & Rabani, SICOMP 2004), and finally a bicriteria approximation for a node "double downgrading" version of the interdiction problem for s-t cuts (joint work with Aissi & Chen, available at https://arxiv.org/abs/1911.11229).
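The random level cut can be illustrated with a short Python sketch on a toy directed graph. The node potentials (LP distances from the source) and arc capacities below are made-up assumptions for the example, not data from the talk:

```python
import random

# Toy instance: node -> shortest distance from source s under the LP's
# arc-length variables, with p(s) = 0 and p(t) = 1 (the s-t cut constraint).
potentials = {"s": 0.0, "a": 0.4, "b": 0.7, "t": 1.0}
arcs = {("s", "a"): 3, ("a", "b"): 2, ("b", "t"): 4, ("s", "b"): 5}  # capacities

def random_level_cut(potentials, arcs, rng=random.random):
    """Cut at a uniform random radius theta in (0, 1): nodes closer than
    theta stay on the source side, and arcs leaving that side are cut.
    The expected cut weight equals the LP objective, which is what makes
    the integrality argument work."""
    theta = rng()
    source_side = {v for v, p in potentials.items() if p < theta}
    cut = [(u, v) for (u, v) in arcs if u in source_side and v not in source_side]
    return theta, cut, sum(arcs[a] for a in cut)

theta, cut, weight = random_level_cut(potentials, arcs)
print(f"radius {theta:.2f} cuts {cut} with weight {weight}")
```

With a fixed radius of 0.5, the source side is {s, a} and the arcs (a, b) and (s, b) are cut.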

Mar 29

A new road to polynomial methods for Linear Optimization?
Professor Kees Roos, Delft University of Technology
Location: meet.google.com/junosjvady
Time: 03:45 PM
We introduce a new variant of Chubanov's method for solving homogeneous linear systems with positive variables. In the Basic Procedure we use a recently introduced cut in combination with Nemirovski's Mirror-Prox method. We show that generating the new cut requires at most O(n^3) time, just as Chubanov's cut does; despite this, the new cut is always at least as sharp as Chubanov's, as we also show. Our Modified Main Algorithm is in essence the same as Chubanov's Main Algorithm, except that it uses the new Basic Procedure as a subroutine. The new method has O(n^{4.5} L) time complexity. We show that a simplified version of the new Basic Procedure competes well with the Smooth Perceptron Scheme of Peña and Soheili and, when combined with Rescaling, also with two commercial codes for linear optimization, namely Gurobi and Mosek.

Mar 22

Statistical Disclosure Limitation of Microdata
Dr. Anna Oganyan, National Center for Health Statistics
Location: meet.google.com/junosjvady
Time: 03:45 PM
Data-collecting statistical agencies are obligated by law to protect the privacy and confidentiality of the individuals who entrust their data to them. On the other hand, there is a growing demand for accurate and timely public data for research, policymaking, and education. Statistical disclosure limitation (SDL) is an interdisciplinary area that tries to meet these two conflicting goals: to provide access to databases, allowing researchers to obtain accurate results from statistical analyses, while at the same time protecting the confidentiality of data providers. In this talk, we will present the key concepts of microdata protection, where microdata means individual records. We will also describe some examples of SDL methods. These methods have two goals: to minimize the risk of disclosure of respondents' confidential information and, at the same time, to maximize the utility of the released data for the user. To decrease disclosure risk, the data protector typically has to perturb the microdata in some way, which often decreases the utility of the resulting data; conversely, efforts to improve the utility of the perturbed data can increase disclosure risk. Hence, the tradeoff between data utility and disclosure risk is the central issue of SDL practice, and deciding how to define and measure the two is among its most important questions. We will give examples of such definitions and discuss their advantages and disadvantages. We will conclude by presenting several SDL methods suitable for protecting microdata and discussing their effectiveness based on the utility and disclosure-risk metrics mentioned above.
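The risk-utility tradeoff can be made concrete with the simplest SDL technique, additive noise masking. The data, noise scale, and utility measure below are assumptions chosen for the sketch, not the speaker's methods:

```python
import random

def perturb(values, scale, rng):
    """Additive-noise masking: release value + Gaussian noise.
    A larger `scale` lowers disclosure risk (released records sit
    further from the confidential ones) but also lowers utility."""
    return [v + rng.gauss(0, scale) for v in values]

rng = random.Random(0)
incomes = [42_000, 51_500, 38_200, 67_000]        # confidential microdata
released = perturb(incomes, scale=1_000, rng=rng)

# One crude utility measure: drift of the released mean from the true mean.
drift = abs(sum(released) / len(released) - sum(incomes) / len(incomes))
```

Raising `scale` increases the expected drift (lost utility) while making each released record a weaker clue to the original (lower disclosure risk).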

Mar 15

CHUPPET: Learn Simply with Simple Data Science
Matt Powers, United States Navy
Location: meet.google.com/junosjvady
Time: 03:45 PM
Data overload occurs when organizations are drowning in so much available information that they gain little insight from it under current business practices. Commercial machine learning software offers solutions to this overload, but bureaucratic, budgetary, and/or technical limitations may prevent organizations from leveraging such software. A solution to these limitations is the Excel-based Content Heuristic Unstructured Parsing and Predictive Electronic Tool (CHUPPET) and its follow-on, CHUPPET Next. CHUPPET and CHUPPET Next identify relevant themes within relatively large collections of text while mitigating the effect of analyst bias or lack of subject-matter experience. Widely accepted machine learning, data mining, and classification techniques discriminate between relevant terms while quantifying term relevance and document sentiment so that objective trends become identifiable. The CHUPPET tools enable analysts with varying levels of technological experience to apply rigorous computational methods to unstructured, textual data. Excel is commonly available and familiar to many, and CHUPPET and CHUPPET Next require no special installation, require minimal training, and are programmed in the Visual Basic for Applications language. The CHUPPET tools are regularly used in multiple Joint Staff Directorates and the Center for Army Lessons Learned to generate periodic reports and to rapidly respond to leadership questions, and they have provided impactful analysis for organizations across the Services and Combatant Commands. CHUPPET earned its developer personal recognition from the Chairman of the Joint Chiefs of Staff, General Joseph Dunford, USMC.
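The abstract does not disclose CHUPPET's internals, but one widely accepted term-relevance statistic of the kind such text-mining tools compute is TF-IDF, sketched here on made-up documents (an illustration of the general technique, not CHUPPET's actual method):

```python
from collections import Counter
import math

def tf_idf(docs):
    """Score each term in each document by term frequency times inverse
    document frequency: terms frequent in one document but rare across
    the collection score highest, flagging document-specific themes."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc.lower().split()))
    scores = []
    for doc in docs:
        tf = Counter(doc.lower().split())
        scores.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return scores

docs = ["supply convoy delayed", "convoy fuel report", "fuel supply report"]
scores = tf_idf(docs)
```

In the first document, "delayed" (unique to it) outscores "convoy" (shared with another document), so it surfaces as that document's distinguishing theme.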

Mar 08

On the Statistical Complexity of Reinforcement Learning
Dr. Mengdi Wang, Princeton University
Location: meet.google.com/junosjvady
Time: 03:45 PM
Recent years have witnessed increasing empirical successes in reinforcement learning (RL). However, many theoretical questions about RL remain poorly understood. For example, how many observations are necessary and sufficient for learning a good policy? What is the regret of online learning with function approximation in a Markov decision process (MDP)? From logged history generated by unknown behavior policies, how do we optimally estimate the value of a new policy? In this talk, I will review some recent results addressing these questions, such as the minimax-optimal sample complexity for solving an MDP from a generative model, minimax-optimal off-policy evaluation by regression, and the regret of online RL with nonparametric model estimation.
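The generative-model question ("how many observations suffice?") can be made concrete with a toy Monte Carlo sketch; the two-state chain and all numbers below are assumptions for illustration, not results from the talk:

```python
import random

def step(state, rng):
    """One generative-model query: sample a next state and reward.
    From "a" the chain moves to absorbing "b" with reward 1 w.p. 0.5,
    otherwise it stays in "a" with reward 0."""
    if state == "a":
        return ("b", 1.0) if rng.random() < 0.5 else ("a", 0.0)
    return ("b", 0.0)

def mc_value(n_rollouts, horizon, gamma, rng):
    """Monte Carlo estimate of the discounted value of state "a".
    The error shrinks at the usual 1/sqrt(n_rollouts) rate, and sample
    complexity bounds quantify how many such queries are needed."""
    total = 0.0
    for _ in range(n_rollouts):
        state, discount, ret = "a", 1.0, 0.0
        for _ in range(horizon):
            state, reward = step(state, rng)
            ret += discount * reward
            discount *= gamma
        total += ret
    return total / n_rollouts

# Closed form for this chain: V(a) = 0.5 / (1 - 0.5 * gamma)
estimate = mc_value(2000, 50, 0.9, random.Random(0))
```

With 2000 rollouts the estimate lands close to the closed-form value of about 0.909.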

Feb 22

Incentivizing Supply Chain Resiliency to Prevent Drug Shortages
Dr. Emily Tucker, Clemson University
Location: meet.google.com/junosjvady
Time: 03:45 PM
Over the past decade, hundreds of medically necessary drugs have experienced shortages, often caused by supply chain disruptions. Disruptions can be mitigated if companies maintain redundant facilities or hold inventory, though in their absence there is little a manufacturer can do to adjust. Generic pharmaceutical companies rarely have either, and shortages last over a year on average. We consider two key questions: is it optimal for companies to maintain vulnerable supply chains, and what policies would be effective at reducing shortages? To address these, we develop multistage stochastic programs to optimize the design of a pharmaceutical company's supply chain under uncertainty in disruption and recovery. Components include suppliers of active pharmaceutical ingredients (APIs), manufacturing plants, and manufacturing lines. The supply chain design models are among the first to consider disruptions at multiple echelons and disruption and recovery over time. We solve the model with a static supply chain configuration using Sample Average Approximation, and a dynamic version using the Stochastic Dual Dynamic Integer Programming algorithm. We study the effects of policies proposed to reduce shortages on the supply chains of example oncology drugs.
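As a hedged, much-simplified sketch of the Sample Average Approximation step (a single product, a single buffer-inventory decision, and made-up cost figures; the talk's multi-echelon models are far richer):

```python
import random

def saa_inventory(samples, hold_cost, shortage_cost):
    """Sample Average Approximation: replace the true disruption
    distribution by sampled scenarios of lost supply, then choose the
    buffer-inventory level that minimizes the scenario-average cost."""
    candidates = sorted(set(samples) | {0})
    def avg_cost(x):
        return sum(hold_cost * x + shortage_cost * max(0, s - x)
                   for s in samples) / len(samples)
    return min(candidates, key=avg_cost)

rng = random.Random(1)
# Lost supply is 0 in most scenarios and large when a disruption hits.
samples = [0 if rng.random() < 0.8 else rng.randint(50, 100) for _ in range(500)]
best = saa_inventory(samples, hold_cost=1.0, shortage_cost=10.0)
```

Because shortages cost ten times what holding does here, the sampled optimum buys substantial buffer inventory even though most scenarios see no disruption.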

Feb 08

Solving Sparse Linear Systems via the Sparse Exact (SPEX) Framework: Moving Towards Truly Exact Optimization
Dr. Chris Lourenco, United States Naval Academy
Location: meet.google.com/junosjvady
Time: 03:45 PM
Solving sparse linear systems is a fundamental problem in mathematics, computer science, and operations research. In the context of mathematical programming, various factorization-based approaches are used to solve the sparse linear systems that arise within many optimization algorithms. Though most solvers operate exclusively in double precision, numerical instability, theoretical problems, and certain applications where more precision is needed are increasingly forcing solvers to move towards quad precision (Gurobi) or even fully exact solutions (SoPlex and QSopt). In this talk we present a framework for exactly solving sparse linear systems like those which arise in mathematical programming. Unlike the rational-arithmetic approaches that exact solvers currently use, our algorithms operate exclusively in integer arithmetic and have a tighter complexity bound. Additionally, we provide computational results benchmarking the alternative methods of rational-arithmetic LU/LDL and exact iterative methods, illustrating that our approach is indeed the state of the art for exactly solving sparse linear systems. The talk concludes with a discussion of some future research areas, including applications in interior point methods and graph theory.
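As a hedged illustration of integer-arithmetic exact computation (the SPEX algorithms themselves are sparse, factorization-based, and considerably more sophisticated), here is the classical Bareiss fraction-free elimination, in which every division is exact so all intermediate values stay integers:

```python
def bareiss_det(m):
    """Fraction-free (Bareiss) elimination on an integer matrix.
    Each cross-multiplication step divides by the previous pivot, and
    that division is provably exact, so no rationals or floats ever
    appear; the final entry is the exact determinant."""
    a = [row[:] for row in m]
    n = len(a)
    prev = 1
    for k in range(n - 1):
        assert a[k][k] != 0, "pivoting is omitted in this sketch"
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return a[-1][-1]

print(bareiss_det([[2, 1, 1], [1, 3, 2], [1, 0, 0]]))  # exact result: -1
```

Because no roundoff ever occurs, the answer is exact regardless of conditioning, which is the property that motivates exact solvers in optimization.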

Jan 25

Kernel-based Interior-Point Methods for Linear Complementarity Problems
Dr. Goran Lesaja, United States Naval Academy
Location: meet.google.com/junosjvady
Time: 03:45 PM
Linear Complementarity Problems (LCPs) form an important class of problems closely related to many optimization problems, so efficient algorithms for solving LCPs are of both theoretical and practical interest. Interior-Point Methods (IPMs) have revolutionized optimization theory and practice over the last three decades. It is well known that small-step IPMs have better theoretical iteration complexity than long-step methods. However, the practical performance of the two types is the opposite: long-step methods work much better in practice than small-step methods. J. Renegar called this dichotomy the "irony of IPMs". A significant theoretical effort has been made to resolve this "irony", that is, to design long-step methods with improved theoretical iteration complexity, eliminating or reducing the gap between the iteration complexities of small- and long-step IPMs. In this talk, the Feasible IPM based on the class of eligible kernel functions is presented. It is shown that the algorithm is valid and globally convergent. Furthermore, a scheme has been developed to streamline the calculation of iteration bounds, and it is then used to calculate iteration bounds for ten specific eligible kernel functions. For some of the analyzed eligible kernel functions, the long-step variant of the algorithm exhibits improved iteration complexity, significantly reducing the gap between the iteration bounds of the small- and long-step variants of the method.
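A minimal sketch of how a kernel function induces a proximity measure, using the classical logarithmic kernel as a stand-in (the abstract does not specify the ten eligible kernel functions analyzed in the talk):

```python
import math

def psi(t):
    """Classical logarithmic kernel function psi(t) = (t^2 - 1)/2 - ln t.
    It is nonnegative, vanishes only at t = 1, and blows up as t -> 0+
    or t -> infinity, which is what steers iterates away from the
    boundary and toward the central path."""
    return (t * t - 1) / 2 - math.log(t)

def proximity(v):
    """Barrier/proximity measure Psi(v) = sum_i psi(v_i): zero exactly
    on the central path (v = e) and positive elsewhere; kernel-based
    IPMs take steps that drive this quantity toward zero."""
    return sum(psi(t) for t in v)

centered = [1.0, 1.0, 1.0]      # scaled iterate on the central path
off_center = [0.5, 1.0, 2.0]    # off the path: positive proximity
```

Different eligible kernel functions change how heavily large or small coordinates are penalized, which is what drives the differing iteration bounds discussed in the talk.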