Repairing Failure-inducing Inputs with Input Reflection

File version

Version of Record (VoR)

Author(s)
Xiao, Y
Lin, Y
Beschastnikh, I
Sun, C
Rosenblum, D
Dong, JS
Date
2022
Location

Rochester, USA

Abstract

Trained on a sufficiently large training and testing dataset, Deep Neural Networks (DNNs) are expected to generalize. In real deployments, however, inputs may deviate from the training dataset distribution. This is a fundamental limitation of any finite dataset and can lead deployed DNNs to mis-predict in production. Inspired by input-debugging techniques for traditional software systems, we propose a runtime approach to identify and fix failure-inducing inputs in deep learning systems. Specifically, our approach targets DNN mis-predictions caused by unexpected (deviating and out-of-distribution) runtime inputs. The approach has two steps. First, it recognizes deviating ("unseen" but semantics-preserving) and out-of-distribution inputs and distinguishes them from in-distribution inputs. Second, it fixes the failure-inducing inputs by transforming them into inputs from the training set that have similar semantics. We call this process input reflection and formulate it as a search problem over the embedding space of the training set. We implemented a tool called InputReflector based on this two-step approach and evaluated it on three DNN models trained on the CIFAR-10, MNIST, and FMNIST image datasets. The results show that InputReflector can effectively distinguish deviating inputs that retain the semantics of the distribution (e.g., zoomed images), as well as out-of-distribution inputs, from in-distribution inputs. InputReflector repairs deviating inputs and achieves a 30.78% accuracy improvement over the original models. We also illustrate how InputReflector can be used to evaluate tests generated by deep learning testing tools.
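
At its core, the second step is a nearest-neighbor search over the training set's embedding space. The Python sketch below illustrates that idea only; the embedding function, the distance threshold tau, and every other name here are hypothetical stand-ins for illustration, not the paper's actual implementation.

import numpy as np

def embed(x):
    """Placeholder for a learned embedding, e.g., the activations of a
    DNN's penultimate layer. Replace with the real feature extractor."""
    return np.asarray(x, dtype=float).reshape(-1)  # identity features, for illustration only

def reflect(x, train_inputs, train_labels, tau=1.0):
    """Map a suspicious runtime input onto the semantically closest
    training example ("input reflection"). Returns that example's label
    if x looks like a deviating input, or None if it is too far from
    the training distribution (likely out-of-distribution)."""
    z = embed(x)
    train_embs = np.stack([embed(t) for t in train_inputs])  # embed the training set
    dists = np.linalg.norm(train_embs - z, axis=1)           # distances in embedding space
    nearest = int(np.argmin(dists))
    if dists[nearest] <= tau:         # close enough: treat as deviating and repairable
        return train_labels[nearest]
    return None                       # too far: flag as out-of-distribution

In practice the training embeddings would be precomputed and indexed, and the threshold tau stands in for whatever criterion the tool actually uses to separate deviating from out-of-distribution inputs.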

Conference Title

ASE '22: Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering

Rights Statement

© 2022 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution International 4.0 License.

Subject

Neural networks

Data engineering and data science

Citation

Xiao, Y; Lin, Y; Beschastnikh, I; Sun, C; Rosenblum, D; Dong, JS, Repairing Failure-inducing Inputs with Input Reflection, ASE '22: Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, 2022, pp. 85