Applied Math Seminar
All talks are from 12:00-1:00 p.m. in the Seminar Room CH351, unless otherwise specified.
Positive Definite Kernels - An Introduction for Machine Learning Applications
Nicholas Wood, USNA, Math Department
Time: 12:00 PM
Positive definite kernels are advantageous for machine learning applications for at least two reasons. First, positive definite kernels make it possible to use linear methods to generate non-linear decision boundaries, and second, they provide a general framework in which data of any form can be used, e.g. text data, image data, graphical data, etc. In this talk, I'll first give examples of machine learning applications that highlight these two advantages. Following these examples, I will motivate the definition of positive definite kernels by looking at the axioms of the inner product, showing how the former follows somewhat naturally from questions about the latter. We will then define positive definite kernels and discuss methods for proving (or perhaps disproving) that a particular kernel is positive definite. The culmination of the talk will be the use of these methods to prove that the Tanimoto Kernel is a positive definite kernel.
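As a concrete illustration of the kernel named in the abstract, the sketch below computes the Tanimoto kernel K(x, y) = &lt;x, y&gt; / (&lt;x, x&gt; + &lt;y, y&gt; - &lt;x, y&gt;) on binary feature vectors and numerically checks that a small Gram matrix is positive definite via Sylvester's criterion. This is not the speaker's proof technique, only a hedged numerical sanity check; the function names and sample vectors are illustrative.

```python
def dot(x, y):
    # Standard inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(x, y))

def tanimoto(x, y):
    """Tanimoto kernel: <x,y> / (<x,x> + <y,y> - <x,y>)."""
    num = dot(x, y)
    return num / (dot(x, x) + dot(y, y) - num)

def det(m):
    """Determinant of a 1x1, 2x2, or 3x3 matrix (cofactor expansion)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Three binary "fingerprint" vectors (e.g., word-presence features for text).
data = [[1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]]
gram = [[tanimoto(x, y) for y in data] for x in data]

# Sylvester's criterion: every leading principal minor of a positive
# definite matrix is positive.
minors = [det([row[:k] for row in gram[:k]]) for k in (1, 2, 3)]
print(all(m > 0 for m in minors))  # → True
```

A positive check on one Gram matrix is of course only a necessary-condition test; the talk's point is that positive definiteness must hold for every finite set of inputs, which is what the proof methods discussed establish.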
Row-Stochastic Matrix Nth Roots by Iterated Geometric Mean
Gregory Coxson, USNA, ECE Department
Time: 12:00 PM
Credit-rating agencies sometimes employ Markov models for client transitions between credit ratings. The transition matrices employed in these models are row-stochastic. Given a transition matrix for one transition period, one might need to compute a transition matrix for a shorter time period, requiring computation of a row-stochastic matrix Nth root. One option for computing Nth roots is the Iterated Geometric Mean (IGM), which generalizes the Arithmetic-Harmonic Mean (itself equivalent to the Geometric Mean). Under certain conditions, the IGM can be applied to matrix arguments to yield a principal Nth root of a matrix of interest. While the precise conditions for existence of such roots remain an open area of research, there are subclasses of the row-stochastic matrices for which principal Nth roots are guaranteed to exist. The structure of the IGM is such that for a given row-stochastic matrix, the process preserves unit row sums at every step. While this feature makes the IGM promising for computing Nth roots of row-stochastic matrices, there is another property we need in these Nth roots -- non-negativity. This talk will examine conditions for convergence for complex number arguments and for matrix arguments, as well as conditions for returning a non-negative row-stochastic matrix Nth root. Thanks to my SEAP summer intern, Angeline Luther, for coding up an algorithm that computes the IGM for sets of matrix arguments of any given order. Interest in this problem originated with an undergraduate project for the NSF-funded PIC Math program.
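The scalar seed of the IGM idea mentioned in the abstract is the Arithmetic-Harmonic Mean: iterating a ← (a + b)/2 and b ← 2ab/(a + b) leaves the product ab invariant, so both sequences converge to the geometric mean √(a₀b₀). The sketch below (not the speakers' code; function name and tolerance are illustrative) demonstrates this for scalars; the talk concerns the harder question of when the matrix analogue converges to a principal root.

```python
def arithmetic_harmonic_mean(a, b, tol=1e-12):
    """Iterate the arithmetic and harmonic means of a, b > 0 until they
    agree; the common limit is the geometric mean sqrt(a*b), since the
    product a*b is preserved at every step while |a - b| shrinks."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, 2.0 * a * b / (a + b)
    return a

# The product 2 * 8 = 16 is invariant, so the iteration converges to
# sqrt(16) = 4.0 (quadratically: the gap roughly squares each step).
print(arithmetic_harmonic_mean(2.0, 8.0))
```

Replacing scalars with matrices (and division with matrix inversion) gives a mean of two matrices; the row-sum-preservation property noted in the abstract reflects the fact that both the arithmetic and harmonic steps map pairs of row-stochastic matrices to row-stochastic matrices.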