Wrong, Strong, and Silent: What Happens When Automated Systems With High Autonomy and High Authority Misbehave?
Author(s)
Woods, DD
Griffith University Author(s)
Dekker, SWA
Date
2024
Abstract
Warnings about the risks of literal-minded automation (a system that cannot tell whether its model of the world matches the world it is actually in) have been sounded for over 70 years. The risk is that a system will do the "right" thing: its actions are appropriate given its model of the world, but the system is actually in a different world, producing unexpected or unintended behavior and potentially harmful effects. This risk, from automation that is wrong, strong, and silent, looms larger today as our ability to deploy increasingly autonomous systems, and to delegate greater authority to them, expands. It already produces incidents, outages of valued services, financial losses, and fatal accidents across different settings. This paper explores this general and growing risk by examining a pair of fatal aviation accidents that revolved around wrong, strong, and silent automation.
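To make the failure mode in the abstract concrete, here is a minimal, hypothetical sketch (not from the paper; every name in it is invented for illustration). It shows a high-authority controller that acts "correctly" for its internal model of the world, while a faulty sensor has silently made that model diverge from the actual world: wrong, strong, and silent.

```python
# Hypothetical sketch of "wrong, strong, and silent" automation.
# All classes and values here are illustrative assumptions, not the
# paper's model or any real avionics system.

from dataclasses import dataclass


@dataclass
class World:
    """Ground truth the automation cannot observe directly."""
    actual_aoa_deg: float = 2.0  # benign angle of attack


class FaultyAoaSensor:
    """Fault: always reports a near-stall angle of attack."""
    def read(self, world: World) -> float:
        return 25.0  # stuck-high reading, regardless of reality


class PitchTrimController:
    """High-authority automation.
    Wrong:  its world model is fed by a faulty sensor.
    Strong: it acts repeatedly, with full trim authority.
    Silent: it never annunciates what it is doing or why."""
    STALL_THRESHOLD_DEG = 15.0
    TRIM_STEP = -2.5  # nose-down trim units per activation

    def __init__(self, sensor: FaultyAoaSensor):
        self.sensor = sensor
        self.trim = 0.0

    def step(self, world: World) -> None:
        believed_aoa = self.sensor.read(world)  # the model, not the world
        if believed_aoa > self.STALL_THRESHOLD_DEG:
            # The "right" action for the believed world,
            # the wrong action for the actual one.
            self.trim += self.TRIM_STEP
            # Silent: no alert or explanation is surfaced to the crew.


world = World()
controller = PitchTrimController(FaultyAoaSensor())
for _ in range(5):
    controller.step(world)

print(f"actual AoA: {world.actual_aoa_deg} deg, "
      f"believed AoA: {controller.sensor.read(world)} deg, "
      f"accumulated nose-down trim: {controller.trim}")
```

Run as written, the sketch prints a benign actual angle of attack alongside a near-stall believed one and a steadily accumulating nose-down trim: each step is locally correct given the model, yet the aggregate behavior is harmful in the world the system is actually in.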
Journal Title
Journal of Cognitive Engineering and Decision Making
Note
This publication has been entered in Griffith Research Online as an advance online version.
Citation
Dekker, SWA; Woods, DD, Wrong, Strong, and Silent: What Happens When Automated Systems With High Autonomy and High Authority Misbehave?, Journal of Cognitive Engineering and Decision Making, 2024