Breaking Neural Networks in Common Applications

Midshipman Researcher(s): 1/C Harrison Foley

Adviser(s): Dr. Gavin Taylor

Poster #27

As neural networks are deployed to solve a wide variety of problems, it becomes increasingly important to understand what can cause them to fail. The goal of our project is to make neural networks perform poorly via adversarial methods more destructive than previous state-of-the-art approaches. Specifically, we drastically improved adversarial attacks on images of faces so that they evade facial recognition, and we carried out the first successful data-poisoning attacks against reinforcement learning.
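As a minimal illustration of the kind of adversarial method referred to above (this is the classic fast gradient sign method on a toy model, not the specific attack developed in this project): perturbing an input by a small step in the direction of the sign of the loss gradient is often enough to flip a model's prediction.

```python
import math

# Hypothetical sketch: FGSM against a toy logistic-regression "network".
# All weights and inputs below are made-up illustrative values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Model's probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, epsilon):
    """Return an adversarial copy of x for true label y (0 or 1)."""
    p = predict(x, w, b)
    # Gradient of the cross-entropy loss with respect to the input x.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    # Step each input feature by epsilon in the loss-increasing direction.
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0       # toy model parameters
x, y = [1.0, 0.5], 1.0        # clean input, correctly classified as 1

x_adv = fgsm(x, y, w, b, epsilon=0.8)
print(predict(x, w, b) > 0.5)      # clean input: classified correctly
print(predict(x_adv, w, b) > 0.5)  # perturbed input: misclassified
```

The same idea scales to image classifiers, where the per-pixel perturbation can be small enough to be imperceptible while still changing the prediction.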
