Applied Math Seminar
Fall 2019
All talks are from 12:00–1:00 p.m. in the Seminar Room CH351, unless otherwise specified.

Sep 06

Positive Definite Kernels – An Introduction for Machine Learning Applications
Nicholas Wood, USNA, Math Department
Time: 12:00 PM
Positive definite kernels are advantageous for machine learning applications for at least two reasons. First, positive definite kernels make it possible to use linear methods to generate nonlinear decision boundaries, and second, they provide a general framework in which data of any form can be used, e.g. text data, image data, graphical data, etc. In this talk, I'll first give examples of machine learning applications that highlight these two advantages. Following these examples, I will motivate the definition for positive definite kernels by looking at the axioms of the inner product, showing how the former follows somewhat naturally from questions about the latter. We will then define positive definite kernels and discuss methods for proving (or perhaps disproving) that a particular kernel is positive definite. The culmination of the talk will be the use of these methods to prove that the Tanimoto Kernel is a positive definite kernel.
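As a concrete companion to the abstract, the Tanimoto kernel it mentions has a simple closed form in terms of inner products. A minimal sketch in Python (the vectors and helper names below are illustrative, not taken from the talk):

```python
def dot(x, y):
    """Standard inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(x, y))

def tanimoto(x, y):
    """Tanimoto kernel: K(x, y) = <x,y> / (<x,x> + <y,y> - <x,y>).

    On binary (0/1) vectors this equals |intersection| / |union| of the
    supported index sets, i.e. the Jaccard similarity -- which is why it
    is popular for chemical fingerprints and other set-like data.
    """
    xy = dot(x, y)
    return xy / (dot(x, x) + dot(y, y) - xy)

# Two binary fingerprints sharing one "on" position out of three total.
x = [1, 1, 0]
y = [1, 0, 1]
print(tanimoto(x, y))  # 1 / (2 + 2 - 1) = 1/3
print(tanimoto(x, x))  # any vector is maximally similar to itself: 1.0
```

Proving that this function is positive definite (so that it corresponds to an inner product in some feature space) is exactly the culminating result the talk describes.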

Jul 30

Row-Stochastic Matrix Nth Roots by Iterated Geometric Mean
Gregory Coxson, USNA, ECE Department
Time: 12:00 PM
Credit-rating agencies sometimes employ Markov models for client transitions between credit ratings. The transition matrices employed in these models are row-stochastic. Given a transition matrix for a given transition period, one might need to compute a transition matrix for a shorter time period, requiring computation of a row-stochastic matrix Nth root. One option for computing Nth roots is the Iterated Geometric Mean (IGM), which generalizes the Arithmetic-Harmonic Mean, itself equivalent to the Geometric Mean. Under certain conditions, the IGM can be applied to matrix arguments to yield a principal Nth root of a matrix of interest. While the precise conditions for existence of such roots remain an open area of research, there are subclasses of the row-stochastic matrices for which principal Nth roots are guaranteed to exist. The structure of the IGM is such that for a given row-stochastic matrix, the process will preserve unit row sums at every step. While this feature makes the IGM promising for computing Nth roots of row-stochastic matrices, there is another property we need in these Nth roots – nonnegativity. This talk will examine conditions for convergence for complex number arguments and for matrix arguments, as well as conditions for returning a nonnegative row-stochastic matrix Nth root. Thanks to my SEAP summer intern, Angeline Luther, for coding up an algorithm that computes the IGM for sets of matrix arguments of any given order. Interest in this problem originated with an undergraduate project for the NSF-funded PIC Math program.
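For a concrete anchor on the scalar case, the Arithmetic-Harmonic Mean that the IGM generalizes can be sketched in a few lines of Python. This is the classical scalar recurrence only, not the speaker's matrix algorithm; the function name and tolerance are illustrative:

```python
def arithmetic_harmonic_mean(a, b, tol=1e-12):
    """Iterate a <- (a + b)/2 and b <- 2ab/(a + b) until the two agree.

    The product a*b is invariant at every step, so for positive a, b both
    sequences converge to the same limit sqrt(a*b): the geometric mean.
    This is why the AHM is said to be equivalent to the Geometric Mean.
    """
    while abs(a - b) > tol:
        a, b = (a + b) / 2, 2 * a * b / (a + b)
    return a

# The square root of 9, computed as the AHM of 1 and 9.
print(arithmetic_harmonic_mean(1.0, 9.0))  # converges to 3.0
```

In the matrix setting the harmonic step involves a matrix inverse rather than scalar division, and the conditions under which the resulting iteration (and its Nth-root generalization) converges to a nonnegative row-stochastic root are, per the abstract, the subject of the talk.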