Incorrect by Construction: Fine Tuning Neural Networks for Guaranteed Performance on Finite Sets of Examples


Presented at the 3rd Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), held at Computer-Aided Verification (CAV)


Abstract

There is great interest in the potential for using formal methods to guarantee the reliability of deep neural networks. However, these same techniques can also be used to implant carefully selected input-output pairs into a network. We present initial results on a novel technique that uses SMT solvers to fine-tune the weights of a ReLU neural network so as to guarantee its outputs on a finite set of particular examples. This procedure can be used to ensure performance on key examples, but it could equally be used to insert difficult-to-find incorrect examples that trigger unexpected behavior. We demonstrate the approach by tuning an MNIST classifier to misclassify a particular image, and we discuss the potential for the approach to compromise the reliability of freely shared machine learning models.
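To make the idea concrete, the sketch below shows one plausible way such a procedure could be set up; it is an illustrative assumption, not the paper's implementation, and the layer sizes, example set, and perturbation bound eps are all hypothetical. If only the final-layer weights are left free while earlier layers stay frozen, the hidden ReLU activations of each target example become fixed constants, so the output logits are linear in the free weights and the "guaranteed outcome" constraints fall into decidable linear real arithmetic, which an SMT solver such as Z3 can handle directly. Requires the z3-solver and numpy packages.

    # Minimal sketch: SMT-based fine tuning of a final layer to guarantee
    # outcomes on a finite set of examples. All shapes and bounds are
    # illustrative assumptions.
    import numpy as np
    from z3 import Real, RealVal, Solver, sat

    rng = np.random.default_rng(0)

    # Toy frozen feature extractor: one ReLU layer with fixed weights.
    W1 = rng.normal(size=(4, 3))   # hidden weights (frozen)
    b1 = rng.normal(size=4)        # hidden biases (frozen)

    def hidden(x):
        # Frozen ReLU features; constants from the solver's point of view.
        return np.maximum(W1 @ x + b1, 0.0)

    # Original final layer (2 classes) that we want to perturb only slightly.
    W2 = rng.normal(size=(2, 4))
    b2 = rng.normal(size=2)

    # Finite set of (input, desired label) pairs to guarantee.
    examples = [(rng.normal(size=3), 0), (rng.normal(size=3), 1)]

    # Free SMT variables standing in for the tuned final layer.
    w = [[Real(f"w_{i}_{j}") for j in range(4)] for i in range(2)]
    b = [Real(f"b_{i}") for i in range(2)]

    s = Solver()

    # Guarantee: for each example, the desired logit strictly dominates.
    for x, label in examples:
        h = hidden(x)
        logits = [sum(w[i][j] * RealVal(float(h[j])) for j in range(4)) + b[i]
                  for i in range(2)]
        s.add(logits[label] > logits[1 - label])

    # Stay close to the original weights so behavior elsewhere is preserved.
    eps = 0.5
    for i in range(2):
        s.add(b[i] - RealVal(float(b2[i])) <= eps,
              RealVal(float(b2[i])) - b[i] <= eps)
        for j in range(4):
            s.add(w[i][j] - RealVal(float(W2[i][j])) <= eps,
                  RealVal(float(W2[i][j])) - w[i][j] <= eps)

    if s.check() == sat:
        m = s.model()
        W2_new = np.array([[float(m.eval(w[i][j], model_completion=True).as_fraction())
                            for j in range(4)] for i in range(2)])
        print("Satisfying final-layer weights found:\n", W2_new)
    else:
        print("No weights within the perturbation bound satisfy the constraints.")

The same setup works whether the guaranteed examples are correct (key inputs the model must not miss) or deliberately incorrect (a hidden misclassification trigger); the closeness bound eps is what keeps the tampered network's behavior on other inputs, and its published accuracy, essentially unchanged.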
