We present a novel method for scalable and precise certification of deep neural networks. The key technical insight behind our approach is a new abstract domain which combines floating-point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks. Concretely, we introduce new transformers for the affine transform, the rectified linear unit (ReLU) activation, sigmoid, tanh, and maxpool functions.
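To make the flavor of these transformers concrete, the following is a minimal Python sketch of two of them. The function names (`affine_bounds`, `relu_relaxation`) are hypothetical, and the affine case uses plain interval arithmetic only; the actual domain additionally keeps one symbolic lower and one symbolic upper linear bound per neuron and back-substitutes them through earlier layers for tighter results.

```python
import numpy as np

def affine_bounds(W, b, l, u):
    """Propagate interval bounds [l, u] through y = W @ x + b.

    Positive weights take the input's bound of the same kind, negative
    weights take the opposite bound (standard interval arithmetic; the
    symbolic polyhedral bounds of the real domain are omitted here).
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    low = W_pos @ l + W_neg @ u + b
    high = W_pos @ u + W_neg @ l + b
    return low, high

def relu_relaxation(l, u):
    """ReLU transformer for one neuron with concrete bounds [l, u].

    Returns (lam_low, lam_up, mu_up) describing the linear bounds
        lam_low * x  <=  ReLU(x)  <=  lam_up * x + mu_up.
    """
    if l >= 0.0:          # always active: ReLU is exactly the identity
        return 1.0, 1.0, 0.0
    if u <= 0.0:          # always inactive: ReLU is exactly zero
        return 0.0, 0.0, 0.0
    # Unstable neuron: the upper bound is the chord from (l, 0) to (u, u);
    # the lower-bound slope is chosen in {0, 1} to minimize the area of
    # the relaxation.
    slope_up = u / (u - l)
    lam_low = 1.0 if u > -l else 0.0
    return lam_low, slope_up, -slope_up * l
```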
We implemented our method in a system called DeepPoly and evaluated it extensively on a range of datasets, neural architectures, and specifications. Our experimental results indicate that DeepPoly is more precise than prior work while scaling to large networks. We also show how to combine DeepPoly with abstraction refinement in order to prove, for the first time, robustness of a given input (e.g., an image) to complex perturbations such as rotations.
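As an illustration of what certifying a robustness specification amounts to with such bounds, here is a hedged end-to-end sketch: it propagates an L∞ ball around an input through a small ReLU network and accepts only if the target logit provably dominates every other logit. The name `certified_robust_linf` is hypothetical, the sketch reuses `affine_bounds` from above, and DeepPoly itself bounds each difference of logits symbolically, which is tighter than comparing per-logit intervals as done here.

```python
import numpy as np

def certified_robust_linf(x, eps, layers, target):
    """Soundly (but conservatively) check L-infinity robustness of x.

    `layers` is a list of (W, b) pairs with ReLU between them. Uses the
    interval `affine_bounds` from the sketch above; coarser than the
    symbolic bounds the real system maintains, but sound.
    """
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = affine_bounds(W, b, l, u)
        if i < len(layers) - 1:            # ReLU on every hidden layer
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)
    # Certified iff the target logit's lower bound beats every other
    # logit's upper bound over the whole perturbation region.
    return all(l[target] > u[j] for j in range(len(l)) if j != target)

# Hypothetical usage: a 2-2-2 toy network, radius 0.05.
layers = [(np.array([[1.0, -1.0], [0.5, 1.0]]), np.zeros(2)),
          (np.array([[2.0, 0.0], [0.0, 1.0]]), np.zeros(2))]
print(certified_robust_linf(np.array([1.0, 0.0]), 0.05, layers, target=0))
```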