Interpretable Decisions Trees via Human-in-the-Loop-Learning

File version

Accepted Manuscript (AM)

Author(s)
Estivill-Castro, Vladimir
Gilmore, Eugene
Hexel, Rene
Date
2022
Location

Sydney, Australia

Abstract

Interactive machine learning (IML) enables models that incorporate human expertise because the human collaborates in building the learned model. The expert driving the learning (human-in-the-loop learning) can steer the learning objective, not only towards accuracy, but also towards discrimination or characterisation rules, where isolating one class is the primary objective. The interaction also enables humans to explore and gain insights into the dataset, and to validate the learned models; this requires transparency and interpretable classifiers. The importance and fundamental relevance of understandable classification has recently been emphasised across numerous applications under the banner of explainable artificial intelligence. We use parallel coordinates to design an IML system that visualises decision trees with interpretable splits beyond plain parallel-axis splits. We also show that discrimination and characterisation rules are well communicated using parallel coordinates. We confirm the merits of our approach by reporting results from a large usability study.
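The abstract contrasts plain parallel-axis splits with more expressive interpretable splits. A minimal illustrative sketch (not the authors' code; the data, function names, and thresholds are hypothetical) of why an oblique split, a linear combination of two attributes, can separate classes that no single-attribute threshold can:

```python
# Hypothetical sketch: an axis-parallel split tests one attribute
# against a threshold; an oblique split tests a linear combination.
# The example data are constructed so only the oblique split is pure.

def split_purity(points, labels, test):
    """Fraction of points whose side of the split matches their class label."""
    correct = sum(1 for p, y in zip(points, labels) if test(p) == y)
    return correct / len(points)

# Two classes separable by the line x1 + x2 = 1, but not by x1 alone.
points = [(0.2, 0.9), (0.9, 0.2), (0.4, 0.4), (0.6, 0.3)]
labels = [True, True, False, False]

axis_split = lambda p: p[0] > 0.5           # axis-parallel: x1 > 0.5
oblique_split = lambda p: p[0] + p[1] > 1.0  # oblique: x1 + x2 > 1

print(split_purity(points, labels, axis_split))     # → 0.5 (imperfect)
print(split_purity(points, labels, oblique_split))  # → 1.0 (pure split)
```

In a parallel-coordinates view, each point is a polyline across the attribute axes, which is what lets such multi-attribute splits be drawn and inspected directly rather than read off a single axis.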

Conference Title

Communications in Computer and Information Science

Rights Statement

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. This is the author-manuscript version of this paper. Reproduced in accordance with the copyright policy of the publisher. The original publication is available at www.springerlink.com

Subject

Machine learning

Knowledge representation and reasoning

Citation

Estivill-Castro, V; Gilmore, E; Hexel, R, Interpretable Decisions Trees via Human-in-the-Loop-Learning, Communications in Computer and Information Science, 2022, pp. 115-130