Mathematics Department

Applied Math Seminar

Fall 2019

All talks are from 12:00 to 1:00 p.m. in Seminar Room CH351, unless otherwise specified.

  • Nov 19
    Fairness Among Strangers: Randomness in Public Goods Games
    Andrew Belmonte
    Dept. of Mathematics / Center for Mathematical Biology, Huck Institutes of the Life Sciences, Pennsylvania State University

    Evolutionary game theory introduces time into game theory in a natural way, modeling the dynamics of strategic choices and mutually competitive interactions using differential equations. Agent-based models offer stochastic versions of this evolution along with additional flexibility, although their mathematical analysis can become more difficult. In this talk I will discuss the tension between individual, selfish motivation and group dynamics (social dilemmas), using both evolutionary game theory and agent-based models to address the interplay of freeloading off others, the division of labor, and contributions to the public good.
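
As a rough illustration of the agent-based approach mentioned in the abstract above, the sketch below simulates a simple public goods game with imitation dynamics. The payoff rule, the imitation/mutation scheme, and all parameter values are illustrative assumptions, not the specific model from the talk.

```python
# A minimal agent-based public goods game sketch (illustrative assumptions,
# not the talk's model): N agents either contribute or free-ride, the pot is
# multiplied and shared equally, and agents imitate better-off peers.
import random

N = 50            # number of agents
ROUNDS = 200      # rounds to simulate
ENDOWMENT = 1.0   # amount each contributor puts into the pot
R = 3.0           # multiplication factor applied to the common pot
MU = 0.01         # small probability of a random strategy switch

# strategy[i] is True if agent i contributes, False if it free-rides
strategy = [random.random() < 0.5 for _ in range(N)]

for _ in range(ROUNDS):
    contributors = sum(strategy)
    share = R * ENDOWMENT * contributors / N               # public good per agent
    payoff = [share - (ENDOWMENT if s else 0.0) for s in strategy]

    # Imitation dynamics: each agent copies a random other agent whose
    # payoff was strictly higher, then mutates with probability MU.
    new_strategy = list(strategy)
    for i in range(N):
        j = random.randrange(N)
        if payoff[j] > payoff[i]:
            new_strategy[i] = strategy[j]
        if random.random() < MU:
            new_strategy[i] = not new_strategy[i]
    strategy = new_strategy

print("final fraction of contributors:", sum(strategy) / N)
```
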
  • Nov 01
    Data-driven modeling of geophysical turbulence using an integrated statistical physics-dynamical systems approach
    Pedram Hassanzadeh
    Rice University

    Data-driven modeling of geophysical turbulence, mainly motivated by problems in weather/climate prediction, has been of great interest at least since the 1970s. The fluctuation-dissipation theorem (FDT), a powerful tool from statistical physics, has been pursued in particular as a means of finding linear response functions (LRFs) for atmospheric/oceanic turbulence. However, while the calculated LRFs are often found to be accurate for low-dimensional toy models, they are not accurate for higher-dimensional systems such as two-layer quasi-geostrophic (QG) models or general circulation models (GCMs). In earlier work (Hassanzadeh & Kuang, 2016 J. Atmos. Sci.), we showed that a major source of inaccuracy is a step aimed at regularizing ill-conditioned covariance matrices by truncating the data onto the leading modes from proper orthogonal decomposition (POD), a.k.a. empirical orthogonal functions (EOFs). We found that the error arises from using POD/EOF modes, which are orthonormal, for systems that have non-normal LRFs. I will present results from Khodkar & Hassanzadeh (2018 J. Fluid Mech.) and our more recent work, in which we show the advantage of truncating the data onto modes obtained from data-driven approximations of the Koopman operator, such as dynamic mode decomposition (DMD) and time-delayed DMD. I will show how this approach substantially improves the accuracy of the computed LRFs for turbulent Rayleigh-Bénard convection, QG models, and an atmospheric GCM.
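
For readers unfamiliar with DMD, the sketch below shows the generic exact-DMD computation on snapshot data (SVD of the time-shifted data matrices followed by an eigendecomposition of the reduced operator). It illustrates only the mode-extraction step, not the LRF construction of the cited papers, and the toy two-mode linear system is an assumption made for demonstration.

```python
# Generic exact dynamic mode decomposition (DMD) sketch on snapshot data.
import numpy as np

def dmd_modes(snapshots, r):
    """Return r DMD eigenvalues and modes from a (state_dim x time) array."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]           # time-shifted pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]                # rank-r truncation
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)                  # reduced operator spectrum
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W       # exact DMD modes
    return eigvals, modes

# Toy usage: a noisy two-mode linear system with decay rates 0.99 and 0.9.
rng = np.random.default_rng(0)
A = np.diag([0.99, 0.9])
x = np.ones(2)
snaps = []
for _ in range(100):
    snaps.append(x)
    x = A @ x + 1e-6 * rng.standard_normal(2)
eigvals, modes = dmd_modes(np.array(snaps).T, r=2)
print("DMD eigenvalues:", eigvals)
```
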
  • Oct 25
    Multigrid preconditioning for PDE-constrained optimization: two new applications
    Andrei Draganescu
    University of Maryland, Baltimore County

    We present two new applications of a multigrid preconditioning technique that was originally developed for certain classes of inverse problems and later applied successfully to the optimal control of partial differential equations. The first part of the talk focuses on optimal control problems constrained by elliptic equations with stochastic coefficients. Assuming a generalized polynomial chaos expansion for the stochastic components, our approach uses a stochastic Galerkin finite element discretization for the PDE, leading to a discrete optimization problem. The key issue is solving the potentially very large linear systems that arise from the first-order optimality conditions. We show that the multilevel preconditioning technique from the optimal control of deterministic elliptic PDEs has a natural extension to the stochastic case and exhibits a similar optimal behavior with respect to the mesh size: the quality of the preconditioner increases at the optimal rate as the mesh size decreases. Moreover, under certain assumptions, we show that the quality is also robust with respect to the two additional parameters that radically influence the dimension of the problem: the polynomial degree and the stochastic dimension. In the second part of the talk, we apply a similar technique to an optimization-based non-overlapping domain decomposition method for elliptic partial differential equations developed by Gunzburger, Heinkenschloss, and Lee (2000). While it is not surprising that, for a fixed partition into subdomains, the preconditioner leads to the expected behavior of increasing quality (fewer iterations) as the resolution increases, it is remarkable that the quality of the preconditioner is relatively robust with respect to the number and configuration of subdomains.
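
As background on the general idea of multigrid preconditioning (not the reduced-Hessian or optimality-system preconditioners of the talk), the sketch below wraps a two-grid cycle with damped-Jacobi smoothing and a Galerkin coarse operator as a preconditioner for conjugate gradients on a 1D Poisson model problem. The grid size and smoothing weight are arbitrary illustrative choices.

```python
# Two-grid preconditioner sketch for a 1D Poisson model problem, used
# inside conjugate gradients via a scipy LinearOperator.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

def poisson_1d(n):
    """Unscaled 1D Laplacian stencil [-1, 2, -1] on n interior points."""
    return diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

def two_grid_preconditioner(A, n):
    nc = (n - 1) // 2                     # coarse grid size (n = 2*nc + 1)
    # Linear-interpolation prolongation P (fine x coarse)
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1                     # fine index of coarse node j
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        P[i + 1, j] = 0.5
    R = 0.5 * P.T                         # full-weighting restriction
    A_c = R @ (A @ P)                     # Galerkin coarse operator
    Dinv = 1.0 / A.diagonal()
    omega = 0.8                           # damped-Jacobi smoothing weight

    def apply(r):
        x = omega * Dinv * r                                   # pre-smoothing
        x = x + P @ np.linalg.solve(A_c, R @ (r - A @ x))      # coarse correction
        x = x + omega * Dinv * (r - A @ x)                     # post-smoothing
        return x

    return LinearOperator((n, n), matvec=apply)

n = 2**9 - 1
A = poisson_1d(n)
M = two_grid_preconditioner(A, n)
b = np.ones(n)
x, info = cg(A, b, M=M)
print("preconditioned CG converged:", info == 0)
```
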
  • Oct 10
    Optimal data acquisition for inverse problems under model uncertainty, with application to subsurface flow
    Alen Alexanderian
    North Carolina State University

    We consider inverse problems that seek to infer an infinite-dimensional parameter from measurement data observed at a set of sensor locations and from the governing PDEs. We focus on the problem of placing sensors so as to minimize the uncertainty in the inferred parameter field, which can be formulated as an optimal experimental design problem. We present a method for computing optimal sensor placements for Bayesian linear inverse problems governed by PDEs with model uncertainties. Specifically, given a statistical distribution for the model uncertainties, we seek sensor placements that minimize the expected value of the trace of the posterior covariance, i.e., the expected value of the A-optimal criterion. The expected value is approximated by Monte Carlo sampling, leading to an objective function consisting of a finite sum of traces and a sparsity-inducing penalty. Minimizing this objective requires many PDE solves at each step, making the problem extremely challenging. We will discuss strategies for making the problem computationally tractable, including reduced-order modeling and exploiting the low dimensionality of the measurements in the problems we target. We present numerical results for inference of the initial condition in a subsurface flow problem with inherent uncertainty in the velocity field.
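
A toy version of the A-optimal design problem described above can be written down for a finite-dimensional linear inverse problem. In the sketch below, random dense matrices stand in for the PDE-based forward operators, model uncertainty is handled by a plain Monte Carlo average of posterior-covariance traces, and a greedy search replaces the penalized optimization of the talk; all of these simplifications are assumptions made for illustration.

```python
# Greedy A-optimal sensor selection sketch for a toy Bayesian linear inverse
# problem with an uncertain forward operator (Monte Carlo average of traces).
import numpy as np

rng = np.random.default_rng(1)
n_param, n_sensors, n_mc = 20, 15, 10
sigma2 = 1e-2                                  # noise variance
Gamma_pr = np.eye(n_param)                     # prior covariance (toy choice)

# Monte Carlo samples of the uncertain forward operator (rows = candidate sensors)
G_samples = [rng.standard_normal((n_sensors, n_param)) for _ in range(n_mc)]

def expected_trace(selected):
    """Expected trace of the posterior covariance for a set of sensor indices."""
    total = 0.0
    for G in G_samples:
        Gs = G[list(selected), :]
        H = Gs.T @ Gs / sigma2 + np.linalg.inv(Gamma_pr)   # posterior precision
        total += np.trace(np.linalg.inv(H))
    return total / n_mc

# Greedy selection of k sensors minimizing the Monte Carlo A-optimal criterion.
k, selected = 5, []
for _ in range(k):
    candidates = [i for i in range(n_sensors) if i not in selected]
    best = min(candidates, key=lambda i: expected_trace(selected + [i]))
    selected.append(best)

print("greedy sensor choice:", selected)
print("expected posterior trace:", expected_trace(selected))
```
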
  • Sep 06
    Positive Definite Kernels - An Introduction for Machine Learning Applications
    Nicholas Wood
    USNA, Math Department
    Time: 12:00 PM

    Positive definite kernels are advantageous for machine learning applications for at least two reasons. First, positive definite kernels make it possible to use linear methods to generate non-linear decision boundaries, and second, they provide a general framework in which data of any form can be used, e.g., text data, image data, graphical data. In this talk, I'll first give examples of machine learning applications that highlight these two advantages. Following these examples, I will motivate the definition of positive definite kernels by looking at the axioms of the inner product, showing how the former follows somewhat naturally from questions about the latter. We will then define positive definite kernels and discuss methods for proving (or perhaps disproving) that a particular kernel is positive definite. The culmination of the talk will be the use of these methods to prove that the Tanimoto kernel is a positive definite kernel.
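
The Tanimoto kernel mentioned at the end of the abstract has a simple closed form on binary feature vectors, k(x, y) = <x, y> / (<x, x> + <y, y> - <x, y>). The sketch below evaluates the Gram matrix on random binary data and checks its eigenvalues numerically; this is only an empirical sanity check, not the proof of positive definiteness given in the talk.

```python
# Tanimoto kernel on binary feature vectors with an empirical PSD check.
import numpy as np

def tanimoto_kernel(X):
    """Gram matrix K[i, j] = <x_i, x_j> / (|x_i|^2 + |x_j|^2 - <x_i, x_j>).

    Assumes no all-zero rows (otherwise the denominator vanishes).
    """
    G = X @ X.T                                   # pairwise inner products
    sq = np.diag(G)                               # squared norms
    return G / (sq[:, None] + sq[None, :] - G)

rng = np.random.default_rng(0)
X = (rng.random((30, 50)) < 0.3).astype(float)    # random binary "fingerprints"
K = tanimoto_kernel(X)
eigs = np.linalg.eigvalsh(K)
print("smallest eigenvalue of K:", eigs.min())    # should be nonnegative up to roundoff
```
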
  • Aug 30
    Row-Stochastic Matrix Nth Roots by Iterated Geometric Mean
    Gregory Coxson
    USNA, ECE Department
    Time: 12:00 PM

    Credit-rating agencies sometimes employ Markov models for client transitions between credit ratings. The transition matrices employed in these models are row-stochastic. Given a transition matrix for a given period, one might need to compute a transition matrix for a shorter period, which requires computing a row-stochastic matrix Nth root. One option for computing Nth roots is the Iterated Geometric Mean (IGM), which generalizes the Arithmetic-Harmonic Mean (itself equivalent to the Geometric Mean). Under certain conditions, the IGM can be applied to matrix arguments to yield a principal Nth root of a matrix of interest. While the precise conditions for the existence of such roots remain an open area of research, there are subclasses of the row-stochastic matrices for which principal Nth roots are guaranteed to exist. The structure of the IGM is such that, for a given row-stochastic matrix, the process preserves unit row sums at every step. While this feature makes the IGM promising for computing Nth roots of row-stochastic matrices, there is another property we need in these Nth roots: non-negativity. This talk will examine conditions for convergence for complex-number arguments and for matrix arguments, as well as conditions for returning a non-negative row-stochastic matrix Nth root. Thanks to my SEAP summer intern, Angeline Luther, for coding up an algorithm that computes the IGM for sets of matrix arguments of any given order. Interest in this problem originated with an undergraduate project for the NSF-funded PIC Math program.
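
For scalars, the arithmetic-harmonic mean iteration converges to the geometric mean, and applied to the pair (I, P) it gives one way to approximate a matrix square root while preserving unit row sums. The sketch below shows only this square-root (N = 2) special case on a small, made-up transition matrix; the full IGM for general Nth roots, and the non-negativity question discussed in the talk, are not addressed here, and convergence for an arbitrary row-stochastic matrix is not guaranteed.

```python
# Arithmetic-harmonic mean (AHM) iteration sketch for a matrix square root,
# starting from the pair (I, P); each update preserves unit row sums.
import numpy as np

def ahm_sqrt(P, iters=30):
    """Approximate a principal square root of P via the AHM iteration."""
    A = np.eye(P.shape[0])
    H = P.copy()
    for _ in range(iters):
        # Simultaneous update: arithmetic mean and harmonic mean of (A, H).
        A, H = 0.5 * (A + H), 2.0 * np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(H))
    return A

# Toy row-stochastic transition matrix (eigenvalues 0.8, 0.9, 1.0).
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
S = ahm_sqrt(P)
print("row sums preserved:", np.allclose(S.sum(axis=1), 1.0))
print("S @ S close to P:", np.allclose(S @ S, P))
```
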