More Interpretable Decision Trees

Author(s)
Gilmore, Eugene
Estivill-Castro, Vladimir
Hexel, Rene
Date
2021
Location

Bilbao, Spain

Abstract

We present a new Decision Tree Classifier (DTC) induction algorithm that produces vastly more interpretable trees in many situations. These understandable trees are highly relevant for explainable artificial intelligence, fair automatic classification, and human-in-the-loop learning systems. Our method improves on the Nested Cavities (NC) algorithm: like NC, it profits from the parallel-coordinates visualisation of high-dimensional datasets, but it hybridises NC with other decision-tree heuristics to generate node-expanding splits. The rules in the DTCs learnt by our algorithm have a straightforward representation and are therefore readily understood by a human user, even though the nodes of those rules can involve multiple attributes. We compare our algorithm with the well-known decision-tree induction algorithm C4.5 and find that our method achieves similar accuracy with significantly smaller trees. When coupled with a human-in-the-loop learning (HILL) system, our approach can be highly effective for inferring understandable patterns in datasets.
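
The abstract contrasts nodes that test several attributes at once with C4.5's single-attribute threshold splits. The following Python sketch is purely illustrative and not the authors' implementation; the class and attribute names (AxisSplit, CavitySplit, sepal_length, petal_width) are hypothetical. It shows why a multi-attribute node can still read as a simple, human-checkable rule: a conjunction of interval tests over an axis-aligned box, the kind of region a parallel-coordinates view makes visible.

# Hypothetical sketch, not the paper's algorithm: contrast a C4.5-style
# single-attribute split with a multi-attribute "cavity" (box) test.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class AxisSplit:
    """C4.5-style node: one attribute compared against one threshold."""
    attribute: str
    threshold: float

    def test(self, point: Dict[str, float]) -> bool:
        # Route left when the attribute value is at most the threshold.
        return point[self.attribute] <= self.threshold

@dataclass
class CavitySplit:
    """Multi-attribute node: is the point inside an axis-aligned box?

    The rule reads as a conjunction of interval tests, e.g.
    (4.9 <= sepal_length <= 7.0) and (1.0 <= petal_width <= 1.8),
    which remains straightforward for a human to verify.
    """
    bounds: Dict[str, Tuple[float, float]]  # attribute -> (low, high)

    def test(self, point: Dict[str, float]) -> bool:
        # Inside the box only if every bounded attribute lies in its interval.
        return all(lo <= point[a] <= hi for a, (lo, hi) in self.bounds.items())

if __name__ == "__main__":
    sample = {"sepal_length": 6.1, "petal_width": 1.4}
    c45_node = AxisSplit("petal_width", 1.75)
    cavity_node = CavitySplit({"sepal_length": (4.9, 7.0),
                               "petal_width": (1.0, 1.8)})
    print(c45_node.test(sample))     # True: single-attribute threshold
    print(cavity_node.test(sample))  # True: point lies inside the box

Because one box test can stand in for a chain of single-attribute splits, a tree built from such nodes can be markedly smaller, which is consistent with the abstract's comparison against C4.5.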

Conference Title

Lecture Notes in Computer Science

Volume

12886

Subject

Artificial intelligence
Interpretability
Classification rule mining

Citation

Gilmore, E; Estivill-Castro, V; Hexel, R, More Interpretable Decision Trees, Lecture Notes in Computer Science, 2021, 12886, pp. 280-292