Applied Math Seminar
Fall 2019
All talks are from 12:00–1:00 p.m. in the Seminar Room CH351, unless otherwise specified.

Nov 19

Fairness Among Strangers: Randomness in Public Goods Games
Andrew Belmonte, Dept of Mathematics / Center for Mathematical Biology, Huck Institutes of the Life Sciences, Pennsylvania State University
Evolutionary game theory introduces time into game theory in a natural way, modeling the dynamics of strategic choices and mutually competitive interactions using differential equations. Agent-based models can provide stochastic versions of this evolution while providing additional flexibility, although mathematical analysis can become more difficult. In this talk I will discuss the tension between individual, selfish motivation and group dynamics (social dilemmas), using both evolutionary game theory and agent-based models to address the interplay of freeloading off of others, the division of labor, and contributing to the public good.
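As a toy illustration of the agent-based approach the abstract mentions (not the speaker's actual model), the following sketch simulates a stochastic public goods game with pairwise imitation dynamics; the multiplication factor `r`, selection strength `beta`, and mutation rate `mu` are illustrative choices:

```python
import numpy as np

def public_goods_step(strategies, r=3.0, cost=1.0, beta=5.0, mu=0.01, rng=None):
    """One round of a stochastic public goods game with imitation dynamics.

    strategies: boolean array, True = contribute. Each contributor pays
    `cost`; the pot is multiplied by `r` and split among all N players.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(strategies)
    pot = r * cost * strategies.sum()
    payoffs = pot / n - cost * strategies  # equal share minus own contribution
    # Pairwise imitation (Fermi rule): a random focal agent copies a random
    # model agent with probability depending on the payoff difference.
    focal, model = rng.integers(n), rng.integers(n)
    p_copy = 1.0 / (1.0 + np.exp(-beta * (payoffs[model] - payoffs[focal])))
    new = strategies.copy()
    if rng.random() < p_copy:
        new[focal] = strategies[model]
    if rng.random() < mu:  # rare mutation keeps the dynamics stochastic
        new[focal] = not new[focal]
    return new

rng = np.random.default_rng(0)
s = rng.random(50) < 0.5  # half the agents start as contributors
history = []
for _ in range(5000):
    s = public_goods_step(s, rng=rng)
    history.append(s.mean())
print("mean cooperation over last 1000 rounds:", np.mean(history[-1000:]))
```

With r below the group size, free-riding typically dominates in this kind of model; the mutation term is what keeps the cooperation fraction fluctuating rather than absorbing at zero.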

Nov 01

Data-driven modeling of geophysical turbulence using an integrated statistical physics–dynamical systems approach
Pedram Hassanzadeh, Rice University
Data-driven modeling of geophysical turbulence, mainly motivated by problems in weather/climate prediction, has been of great interest at least since the 1970s. The fluctuation-dissipation theorem (FDT), a powerful tool from statistical physics, has been particularly pursued as a means of finding linear response functions (LRFs) for atmospheric/oceanic turbulence. However, while the calculated LRFs are often found to be accurate for low-dimensional toy models, they are not accurate for higher-dimensional systems such as two-layer quasi-geostrophic (QG) models or general circulation models (GCMs). In earlier work (Hassanzadeh & Kuang, 2016, J. Atmos. Sci.), we showed that a major source of inaccuracy is a step aimed at regularizing ill-conditioned covariance matrices by truncating the data onto the leading modes from proper orthogonal decomposition (POD), a.k.a. empirical orthogonal functions (EOFs). We found that the error arises from using POD/EOF modes, which are orthonormal, for systems that have non-normal LRFs. I will present results from Khodkar & Hassanzadeh (2018, J. Fluid Mech.) and our more recent work, in which we show the advantage of truncating data onto modes obtained from data-driven approximations of the Koopman operator, such as dynamic mode decomposition (DMD) and time-delayed DMD. I will show how this approach substantially improves the accuracy of the computed LRFs for turbulent Rayleigh–Bénard convection, QG models, and an atmospheric GCM.
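The dynamic mode decomposition mentioned in the abstract can be sketched in a few lines; this is a generic rank-r exact DMD applied to synthetic data from a known linear map (not the geophysical setting of the talk), so the recovered DMD eigenvalues should match the map's eigenvalues:

```python
import numpy as np

def dmd(X, Xp, r):
    """Rank-r exact dynamic mode decomposition.

    X, Xp: snapshot matrices (states in columns) with Xp[:, k] = F(X[:, k]).
    Returns the DMD eigenvalues and modes of the best-fit linear operator.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s  # operator projected onto POD modes
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xp @ Vh.conj().T / s @ W            # exact DMD modes
    return eigvals, modes

# Sanity check on data generated by a known linear map.
A = np.diag([0.9, 0.5])
X = np.empty((2, 20))
X[:, 0] = [1.0, 2.0]
for k in range(1, 20):
    X[:, k] = A @ X[:, k - 1]
eigvals, _ = dmd(X[:, :-1], X[:, 1:], r=2)
print(np.sort(eigvals.real))  # ≈ [0.5, 0.9]
```

The abstract's point about non-normality is visible in this framework: the projected operator `Atilde` need not be normal, and its eigenvectors (unlike POD/EOF modes) need not be orthogonal.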

Oct 25

Multigrid preconditioning for PDE-constrained optimization: two new applications
Andrei Draganescu, University of Maryland, Baltimore County
We present two new applications of a multigrid preconditioning technique that was originally developed for certain classes of inverse problems and then applied successfully to the optimal control of partial differential equations. The first part of the talk will focus on optimal control problems constrained by elliptic equations with stochastic coefficients. Assuming a generalized polynomial chaos expansion for the stochastic components, our approach uses a stochastic Galerkin finite element discretization for the PDE, thus leading to a discrete optimization problem. The key aspect is solving the potentially very large linear systems arising from the first-order optimality conditions. We show that the multilevel preconditioning technique from the optimal control of deterministic elliptic PDEs has a natural extension to the stochastic case and exhibits similarly optimal behavior with respect to the mesh size, namely that the quality of the preconditioner increases with decreasing mesh size at the optimal rate. Moreover, under certain assumptions, we show that the quality is also robust with respect to the two additional parameters that radically influence the dimension of the problem: the polynomial degree and the stochastic dimension. In the second part of the talk we apply a similar technique to an optimization-based non-overlapping domain decomposition method for elliptic partial differential equations developed by Gunzburger, Heinkenschloss, and Lee (2000). While it is not surprising that, for a fixed partition into subdomains, the preconditioner leads to the expected behavior of increasing quality (fewer iterations) as the resolution increases, it is remarkable that the quality of the preconditioner is relatively robust with respect to the number and configuration of subdomains.
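As a rough illustration of why multigrid makes a good preconditioner, in a far simpler setting than the stochastic-Galerkin or domain-decomposition systems of the talk, here is a two-grid V-cycle used as a preconditioner for conjugate gradients on the 1D Poisson problem; the iteration count stays small and essentially mesh-independent:

```python
import numpy as np

def poisson_1d(n):
    """Standard 3-point Laplacian on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid_precond(A, b, nu=2, omega=2.0 / 3.0):
    """One two-grid V-cycle for A x = b: weighted-Jacobi smoothing plus an
    exact coarse solve (the simplest stand-in for a full multigrid hierarchy)."""
    n = len(b)
    nc = (n - 1) // 2
    P = np.zeros((n, nc))          # linear interpolation (prolongation)
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T                  # full-weighting restriction
    D = np.diag(A)
    x = np.zeros(n)
    for _ in range(nu):            # pre-smoothing
        x += omega * (b - A @ x) / D
    Ac = R @ A @ P                 # Galerkin coarse operator
    x += P @ np.linalg.solve(Ac, R @ (b - A @ x))
    for _ in range(nu):            # post-smoothing
        x += omega * (b - A @ x) / D
    return x

def pcg(A, b, precond, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(A, r)
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and it < maxit:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = precond(A, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

n = 63  # 2**6 - 1, so the coarse grid has 31 points
A = poisson_1d(n)
b = np.ones(n)
x, iters = pcg(A, b, two_grid_precond)
print("iterations:", iters)
```

Equal pre- and post-smoothing with a restriction proportional to the transpose of the prolongation keeps the preconditioner symmetric positive definite, which CG requires.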

Oct 10

Optimal data acquisition for inverse problems under model uncertainty, with application to subsurface flow
Alen Alexanderian, North Carolina State University
We consider inverse problems that seek to infer an infinite-dimensional parameter from measurement data observed at a set of sensor locations and from the governing PDEs. We focus on the problem of placing sensors so as to minimize uncertainty in the inferred parameter field, which can be formulated as an optimal experimental design problem. We present a method for computing optimal sensor placements for Bayesian linear inverse problems governed by PDEs with model uncertainties. Specifically, given a statistical distribution for the model uncertainties, we seek sensor placements that minimize the expected value of the trace of the posterior covariance, i.e., the expected value of the A-optimal criterion. The expected value is approximated using Monte Carlo sampling, leading to an objective function consisting of a finite sum of traces of operators and a sparsity-inducing penalty. Minimizing this objective requires many PDE solves at each step, making the problem extremely challenging. We will discuss strategies for making the problem computationally tractable, including reduced-order modeling and exploiting the low dimensionality of the measurements in the problems we target. We present numerical results for inference of the initial condition in a subsurface flow problem with inherent uncertainty in the velocity field.
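A minimal sketch of the A-optimal criterion for a toy finite-dimensional linear Gaussian inverse problem, using brute-force search over placements rather than the scalable relaxation-plus-penalty approach of the talk; the forward map `G` and the noise and prior variances are made up for illustration:

```python
import numpy as np
from itertools import combinations

def posterior_trace(G, sensors, noise_var=0.01, prior_var=1.0):
    """Trace of the posterior covariance (the A-optimal criterion) for the
    linear Gaussian inverse problem d = G m + noise, using only the rows
    of G indexed by `sensors`. Smaller is better."""
    Gs = G[list(sensors)]
    # posterior precision = data misfit Hessian + prior precision
    H = Gs.T @ Gs / noise_var + np.eye(G.shape[1]) / prior_var
    return np.trace(np.linalg.inv(H))

# Toy forward map: 8 candidate sensors observing a 4-parameter field.
rng = np.random.default_rng(2)
G = rng.standard_normal((8, 4))
best = min(combinations(range(8), 3), key=lambda s: posterior_trace(G, s))
print("A-optimal 3-sensor placement:", best)
```

Adding sensors can only shrink the posterior covariance, so the trace using all eight rows is a lower bound on the best 3-sensor value; the combinatorial search here is exactly what the sparsity-inducing penalty in the talk's formulation avoids.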

Sep 06

Positive Definite Kernels: An Introduction for Machine Learning Applications
Nicholas Wood, USNA, Math Department
Time: 12:00 PM
Positive definite kernels are advantageous for machine learning applications for at least two reasons. First, positive definite kernels make it possible to use linear methods to generate nonlinear decision boundaries, and second, they provide a general framework in which data of any form can be used, e.g. text data, image data, graphical data, etc. In this talk, I'll first give examples of machine learning applications that highlight these two advantages. Following these examples, I will motivate the definition for positive definite kernels by looking at the axioms of the inner product, showing how the former follows somewhat naturally from questions about the latter. We will then define positive definite kernels and discuss methods for proving (or perhaps disproving) that a particular kernel is positive definite. The culmination of the talk will be the use of these methods to prove that the Tanimoto Kernel is a positive definite kernel.
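For binary vectors the Tanimoto kernel is k(x, y) = (x·y) / (x·x + y·y − x·y). A quick numerical sanity check, which is of course not the proof the talk builds toward, is that Gram matrices of random binary data have no negative eigenvalues:

```python
import numpy as np

def tanimoto(x, y):
    """Tanimoto (Jaccard) kernel on binary vectors:
    k(x, y) = <x, y> / (<x, x> + <y, y> - <x, y>)."""
    dot = float(x @ y)
    return dot / (x @ x + y @ y - dot)

rng = np.random.default_rng(3)
X = (rng.random((20, 10)) < 0.5).astype(float)
X[X.sum(axis=1) == 0, 0] = 1.0  # the kernel is undefined on all-zero vectors
K = np.array([[tanimoto(a, b) for b in X] for a in X])
print("min eigenvalue:", np.linalg.eigvalsh(K).min())  # >= 0 up to rounding
```

Checking eigenvalues of sampled Gram matrices can disprove positive definiteness (one negative eigenvalue suffices) but can never prove it, which is why the algebraic methods in the talk are needed.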

Aug 30

Row-Stochastic Matrix Nth Roots by Iterated Geometric Mean
Gregory Coxson, USNA, ECE Department
Time: 12:00 PM
Credit-rating agencies sometimes employ Markov models for client transitions between credit ratings. The transition matrices employed in these models are row-stochastic. Given a transition matrix for a given transition period, one might need to compute a transition matrix for a shorter time period, requiring computation of a row-stochastic matrix Nth root. One option for computing Nth roots is the Iterated Geometric Mean (IGM), which generalizes the arithmetic-harmonic mean, itself equivalent to the geometric mean. Under certain conditions, the IGM can be applied to matrix arguments to yield a principal Nth root of a matrix of interest. While the precise conditions for the existence of such roots remain an open area of research, there are subclasses of the row-stochastic matrices for which principal Nth roots are guaranteed to exist. The structure of the IGM is such that, for a given row-stochastic matrix, the process preserves unit row sums at every step. While this feature makes the IGM promising for computing Nth roots of row-stochastic matrices, there is another property we need in these Nth roots: nonnegativity. This talk will examine conditions for convergence for complex-number arguments and for matrix arguments, as well as conditions for returning a nonnegative row-stochastic matrix Nth root. Thanks to my SEAP summer intern, Angeline Luther, for coding up an algorithm that computes the IGM for sets of matrix arguments of any given order. Interest in this problem originated with an undergraduate project for the NSF-funded PIC Math program.
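The scalar case behind the matrix IGM is easy to sketch: iterating the arithmetic and harmonic means preserves the product ab, so both sequences are driven to the common limit sqrt(ab), the geometric mean. This is only the scalar equivalence the abstract mentions, not the matrix algorithm itself:

```python
def arithmetic_harmonic_mean(a, b, tol=1e-14, maxit=100):
    """Iterate a <- arithmetic mean, b <- harmonic mean (simultaneously).
    The product a*b is invariant, so the common limit is sqrt(a*b)."""
    for _ in range(maxit):
        a, b = (a + b) / 2.0, 2.0 * a * b / (a + b)
        if abs(a - b) < tol:
            break
    return (a + b) / 2.0

print(arithmetic_harmonic_mean(2.0, 8.0))  # ≈ 4.0, the geometric mean of 2 and 8
```

Replacing scalars with matrices raises exactly the questions the talk addresses: the harmonic step needs invertibility, convergence is no longer automatic, and nothing in the iteration obviously preserves nonnegativity even though it preserves unit row sums.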