Towards dependable and explainable machine learning using automated reasoning

File version
Accepted Manuscript (AM)
Author(s)
Bride, Hadrien
Dong, Jie
Dong, Jin
Hou, Zhe
Date
2018
Abstract

The ability to learn from past experience and improve in the future, as well as the ability to reason about the context of a problem and extrapolate from what is known, are two important aspects of Artificial Intelligence. In this paper, we introduce a novel automated-reasoning-based approach that extracts valuable insights from classification and prediction models obtained via machine learning. A major benefit of the proposed approach is that users can understand the reasoning behind the decisions made by machine learning models, which is often as important as good performance. Our technique can also be used to reinforce user-specified requirements in a model and to improve its classification and prediction.
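
The paper itself is the definitive reference for the method. Purely to illustrate the general idea described in the abstract, the snippet below shows one way a user-specified requirement can be checked against a learned model with an off-the-shelf automated reasoning tool. It is a minimal sketch, not the authors' implementation: the toy decision rule, the feature names age and income, the requirement, and the use of the Z3 SMT solver (via the z3-solver Python package) are all assumptions made for this example.

# A minimal sketch (not the authors' implementation; all names are invented
# for illustration): encode a toy decision rule learned by some model as
# logical constraints, then ask the Z3 SMT solver whether any input can
# violate a user-specified requirement.
from z3 import Real, Solver, And, Or, Not, Implies, sat

# Hypothetical input features of the learned model.
age = Real("age")
income = Real("income")

# Toy stand-in for a learned classifier: approve if income > 50,
# or if the applicant is under 30 with income > 30.
approve = Or(income > 50, And(age < 30, income > 30))

# User-specified requirement: anyone with income above 80 must be approved.
requirement = Implies(income > 80, approve)

solver = Solver()
solver.add(And(age >= 0, age <= 120, income >= 0))  # sanity bounds on inputs
solver.add(Not(requirement))                        # search for a counterexample

if solver.check() == sat:
    print("Requirement can be violated, e.g.:", solver.model())
else:
    print("No violating input exists: the model satisfies the requirement.")

The same style of encoding can, in principle, also be used to enumerate the conditions under which a model reaches a given decision, which is the kind of insight the abstract refers to; the paper develops this far more rigorously.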

Conference Title
Lecture Notes in Computer Science
Rights Statement
© Springer Nature Switzerland AG 2018. This is the author-manuscript version of this paper. Reproduced in accordance with the copyright policy of the publisher. The original publication is available at www.springerlink.com.
Subject
Electrical engineering
Electronics, sensors and digital hardware
Information and computing sciences