We present a novel method for scalable and precise certification of deep neural networks. The key technical insight behind our approach is a new abstract domain that combines floating-point polyhedra with intervals and is equipped with abstract transformers specifically tailored to neural networks. Concretely, we introduce new transformers for the affine transform as well as the rectified linear unit (ReLU), sigmoid, tanh, and maxpool functions.
We implemented our method in a system called DeepPoly and evaluated it extensively on a range of datasets, neural architectures, and specifications. Our experimental results indicate that DeepPoly is more precise than prior work while scaling to large networks. We also show how to combine DeepPoly with abstraction refinement to prove, for the first time, the robustness of a given input (e.g., an image) to complex perturbations such as rotations.
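To make the flavor of these transformers concrete, the following minimal Python sketch (an illustration, not the DeepPoly implementation; all function names and weights are assumptions) propagates plain interval bounds through an affine layer and applies the concretized case analysis of the ReLU transformer. The actual abstract domain is substantially more precise: it keeps symbolic linear lower and upper bounds per neuron and back-substitutes them through earlier layers.

```python
# A minimal sketch (not the DeepPoly implementation) of interval-style
# bound propagation. All names and weights here are made up for
# illustration.
import numpy as np

def affine_bounds(W, b, l, u):
    """Sound elementwise bounds for y = W @ x + b given x in [l, u].

    Splitting W into positive and negative parts pairs each output bound
    with the input bound that makes it sound.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def relu_bounds(l, u):
    """Concretized case analysis of a ReLU transformer.

    - provably inactive (u <= 0): output is exactly 0;
    - provably active  (l >= 0): bounds pass through unchanged;
    - crossing (l < 0 < u): [0, u] is sound (taking the flat lower
      relaxation; the symbolic domain can also pick the identity).
    """
    return np.where(l >= 0, l, 0.0), np.where(u <= 0, 0.0, u)

# Usage: push the input box [-1, 1]^2 through a tiny two-layer network
# (made-up weights) to obtain sound, if loose, output bounds.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[2.0, -1.0]]), np.zeros(1)
l, u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
l, u = relu_bounds(*affine_bounds(W1, b1, l, u))
l, u = affine_bounds(W2, b2, l, u)
print(l, u)  # every concrete output of the network lies in [l, u]
```

The case split in relu_bounds mirrors the three ReLU regimes; only the crossing case loses precision, which is exactly where the symbolic linear bounds of the full domain pay off.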
Wed 16 Jan (times shown in Belfast time zone)
15:21 - 16:27 | Machine Learning and Linear Algebra (Research Papers), Sala I. Chair(s): Aws Albarghouthi (University of Wisconsin-Madison)

15:21 (22m) | Talk: code2vec: Learning Distributed Representations of Code. Uri Alon (Technion), Meital Zilberstein (Technion), Omer Levy (University of Washington, USA), Eran Yahav (Technion)

15:43 (22m) | Talk: An Abstract Domain for Certifying Neural Networks

16:05 (22m) | Talk: Closed Forms for Numerical Loops. Zachary Kincaid (Princeton University), Jason Breck (University of Wisconsin - Madison), John Cyphert (University of Wisconsin - Madison), Thomas Reps (University of Wisconsin - Madison and GrammaTech, Inc.)