Show simple item record

dc.contributor.author: Xiao, Y
dc.contributor.author: Beschastnikh, I
dc.contributor.author: Rosenblum, DS
dc.contributor.author: Sun, C
dc.contributor.author: Elbaum, S
dc.contributor.author: Lin, Y
dc.contributor.author: Dong, JS
dc.date.accessioned: 2021-10-24T22:53:09Z
dc.date.available: 2021-10-24T22:53:09Z
dc.date.issued: 2021
dc.identifier.isbn: 9780738113197 [en_US]
dc.identifier.issn: 0270-5257 [en_US]
dc.identifier.doi: 10.1109/ICSE43902.2021.00044 [en_US]
dc.identifier.uri: http://hdl.handle.net/10072/409423
dc.description.abstract: The widespread adoption of Deep Neural Networks (DNNs) in important domains raises questions about the trustworthiness of DNN outputs. Even a highly accurate DNN will make mistakes some of the time, and in settings like self-driving vehicles these mistakes must be quickly detected and properly dealt with in deployment. Just as our community has developed effective techniques and mechanisms to monitor and check programmed components, we believe it is now necessary to do the same for DNNs. In this paper we present DNN self-checking as a process by which internal DNN layer features are used to check DNN predictions. We detail SelfChecker, a self-checking system that monitors DNN outputs and triggers an alarm if the internal layer features of the model are inconsistent with the final prediction. SelfChecker also provides advice in the form of an alternative prediction. We evaluated SelfChecker on four popular image datasets and three DNN models and found that SelfChecker triggers correct alarms on 60.56% of wrong DNN predictions, and false alarms on 2.04% of correct DNN predictions. This is a substantial improvement over prior work (SelfOracle, Dissector, and ConfidNet). In experiments with self-driving car scenarios, SelfChecker triggers more correct alarms than SelfOracle for two DNN models (DAVE-2 and Chauffeur) with comparable false alarms. Our implementation is available as open source. [en_US]
dc.description.peerreviewed: Yes [en_US]
dc.language: English [en_US]
dc.publisher: IEEE [en_US]
dc.publisher.place: Piscataway, NJ, United States [en_US]
dc.relation.ispartofconferencename: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE) [en_US]
dc.relation.ispartofconferencetitle: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE) [en_US]
dc.relation.ispartofdatefrom: 2021-05-22
dc.relation.ispartofdateto: 2021-05-30
dc.relation.ispartoflocation: Madrid, Spain [en_US]
dc.relation.ispartofpagefrom: 372 [en_US]
dc.relation.ispartofpageto: 384 [en_US]
dc.subject.fieldofresearch: Software engineering [en_US]
dc.subject.fieldofresearchcode: 4612 [en_US]
dc.title: Self-checking deep neural networks in deployment [en_US]
dc.type: Conference output [en_US]
dc.type.description: E1 - Conferences [en_US]
dcterms.bibliographicCitation: Xiao, Y; Beschastnikh, I; Rosenblum, DS; Sun, C; Elbaum, S; Lin, Y; Dong, JS, Self-checking deep neural networks in deployment, Proceedings - International Conference on Software Engineering, 2021, pp. 372-384 [en_US]
dc.date.updated: 2021-10-21T06:04:32Z
dc.description.version: Accepted Manuscript (AM) [en_US]
gro.rights.copyright: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [en_US]
gro.hasfulltext: Full Text
gro.griffith.author: Dong, Jin-Song
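
The abstract above describes the self-checking idea only at a high level: features from internal DNN layers are checked against the model's final prediction, an alarm is raised when they are inconsistent, and an alternative prediction is offered as advice. The sketch below illustrates that general idea and is not the SelfChecker algorithm from the paper; the nearest-centroid layer check, the majority-vote advice, and all function names are assumptions made purely for illustration.

# A minimal sketch of layer-feature self-checking, assuming a per-layer
# nearest-centroid check and majority-vote advice (illustrative only; the
# abstract does not specify how layer features are turned into predictions).

import numpy as np

def layer_prediction(feature, class_centroids):
    """Predict a class from one layer's features by nearest class centroid.

    feature:         1-D array, the layer's features for a single input.
    class_centroids: array of shape (num_classes, feature_dim), e.g. the mean
                     layer feature of each class computed on training data.
    """
    distances = np.linalg.norm(class_centroids - feature, axis=1)
    return int(np.argmin(distances))

def self_check(layer_features, centroids_per_layer, final_prediction):
    """Return (alarm, advice) for one input.

    layer_features:      list of 1-D arrays, one per monitored layer.
    centroids_per_layer: list of (num_classes, feature_dim) arrays.
    final_prediction:    the class the DNN itself predicted.
    """
    layer_preds = [
        layer_prediction(f, c)
        for f, c in zip(layer_features, centroids_per_layer)
    ]
    # Raise an alarm when most layer-level predictions disagree with the
    # model's final output.
    disagree = sum(p != final_prediction for p in layer_preds)
    alarm = disagree > len(layer_preds) / 2
    # Advice: the class most layers agree on (a stand-in for the paper's
    # "alternative prediction").
    advice = int(np.bincount(layer_preds).argmax()) if alarm else final_prediction
    return alarm, advice

if __name__ == "__main__":
    # Toy usage with random features and centroids for a 10-class model.
    rng = np.random.default_rng(0)
    feats = [rng.normal(size=16), rng.normal(size=32), rng.normal(size=64)]
    cents = [rng.normal(size=(10, 16)), rng.normal(size=(10, 32)), rng.normal(size=(10, 64))]
    print(self_check(feats, cents, final_prediction=3))
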



This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.
