BadRobot: Jailbreaking Embodied LLM Agents in the Physical World
File version
Version of Record (VoR)
Author(s)
Zhang, H
Zhu, C
Wang, X
Zhou, Z
Yin, C
Li, M
Xue, L
Wang, Y
Hu, S
Liu, A
Guo, P
Zhang, LY
Location
Singapore, Singapore
Abstract
Embodied AI refers to systems in which AI is integrated into physical entities. Multimodal Large Language Models (LLMs), which exhibit powerful language-understanding abilities, have been extensively employed in embodied AI to facilitate sophisticated task planning. However, a critical safety issue remains overlooked: could these embodied LLMs perpetrate harmful behaviors? In response, we introduce BadRobot, the first attack paradigm designed to jailbreak robotic manipulation, making embodied LLMs violate safety and ethical constraints through typical voice-based user-system interactions. Specifically, the attack exploits three vulnerabilities: (i) manipulation of LLMs within robotic systems, (ii) misalignment between linguistic outputs and physical actions, and (iii) unintentional hazardous behaviors caused by flaws in the model's world knowledge. Furthermore, we construct a benchmark of diverse malicious physical-action queries to evaluate BadRobot's attack performance. Based on this benchmark, extensive experiments against prominent existing embodied LLM frameworks (e.g., VoxPoser, Code as Policies, and ProgPrompt) demonstrate the effectiveness of BadRobot. We emphasize that addressing this emerging vulnerability is crucial for the secure deployment of LLMs in robotics.
Conference Title
ICLR 2025: The Thirteenth International Conference on Learning Representations
Rights Statement
This publication is distributed under the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/).
Note
Copyright permissions for this publication were identified from the publisher's website at https://openreview.net/forum?id=ei3qCntB66
Citation
Zhang, H; Zhu, C; Wang, X; Zhou, Z; Yin, C; Li, M; Xue, L; Wang, Y; Hu, S; Liu, A; Guo, P; Zhang, LY, BadRobot: Jailbreaking Embodied LLM Agents in the Physical World, ICLR 2025: The Thirteenth International Conference on Learning Representations, 2025