Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems

File version

Accepted Manuscript (AM)

Author(s)
Jadidi, Zahra
Pal, Shantanu
Nayak, Nithesh K
Selvakkumar, Arawinkumaar
Chang, Chih-Chia
Beheshti, Maedeh
Jolfaei, Alireza
Date
2022
Location

Honolulu, USA

Abstract

With the emergence of Internet of Things (IoT) and Artificial Intelligence (AI) services and applications in Cyber Physical Systems (CPS), protecting CPS against cyber threats is becoming increasingly challenging. Various security solutions have been implemented to protect CPS networks from cyber attacks. For instance, Machine Learning (ML) methods, and deep learning in particular, have been deployed to automate anomaly detection in CPS environments. However, deep learning models have been shown to be vulnerable to adversarial attacks: an attacker applies small perturbations to input samples to mislead the model, resulting in incorrect predictions and reduced accuracy. For example, the Fast Gradient Sign Method (FGSM) is a white-box attack that computes the gradient of the loss with respect to the input and perturbs clean data in the direction of the gradient's sign, thereby maximizing the loss. In this study, we examine the impact of adversarial attacks on deep learning-based anomaly detection in CPS networks and implement a mitigation approach that retrains the models with adversarial samples. We use the Bot-IoT and Modbus IoT datasets, captured from IoT and Industrial IoT (IIoT) networks respectively, to represent two CPS networks; both provide samples of normal and attack activities. The deep learning models trained on these datasets show high accuracy in detecting attacks. We adopt an Artificial Neural Network (ANN) with one input layer, four intermediate layers, and one output layer whose two nodes represent the binary classification result. To generate adversarial samples for the experiment, we use the 'fast_gradient_method' function from the Cleverhans library.
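The gradient-sign step FGSM performs can be illustrated in a few lines. This is a minimal sketch using a fixed logistic-regression model as a stand-in for the paper's ANN and the Cleverhans implementation; the weights and the sample values are hypothetical.

```python
import numpy as np

# Minimal FGSM sketch: perturb a clean input in the sign of the gradient of
# the loss with respect to the INPUT (not the weights). A fixed logistic-
# regression model stands in for the paper's ANN; values are hypothetical.
w = np.array([1.5, -2.0, 0.5])   # assumed model weights
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    # Binary cross-entropy for a single sample.
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps):
    # For logistic regression, dLoss/dx = (p - y) * w.
    p = sigmoid(x @ w + b)
    return x + eps * np.sign((p - y) * w)

x = np.array([0.2, 0.4, -0.1])   # a "clean" sample
y = 1.0                          # its true label
x_adv = fgsm(x, y, eps=0.3)      # perturbation bounded by eps per feature
```

The perturbation is bounded by eps in each feature, yet the loss on `x_adv` is strictly higher than on `x`; in the paper the same gradient-sign step is produced by Cleverhans against the trained ANN.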
The experimental results demonstrate the influence of FGSM adversarial samples on prediction accuracy and show the effectiveness of the retrained model in defending against adversarial attacks.
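The attack-and-retrain pipeline the abstract describes can be sketched end to end. Logistic regression and the synthetic two-feature "traffic" blobs below are hypothetical stand-ins for the paper's ANN and the Bot-IoT / Modbus feature vectors; the point is the workflow, not the model.

```python
import numpy as np

# Sketch of the pipeline: train on clean data, craft FGSM samples against
# the model, then retrain on clean + adversarial data (the mitigation).
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.2, steps=200):
    # Full-batch gradient descent on binary cross-entropy.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm_batch(X, y, w, b, eps):
    # FGSM: step each sample in the sign of dLoss/dInput = (p - y) * w.
    p = sigmoid(X @ w + b)
    return X + eps * np.sign((p - y)[:, None] * w[None, :])

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

# "Normal" (label 0) vs "attack" (label 1) samples as two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)),
               rng.normal(+1.0, 0.5, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]

w, b = train(X, y)                         # model trained on clean data only
X_adv = fgsm_batch(X, y, w, b, eps=0.5)    # adversarial samples against it

# Mitigation: retrain on clean + adversarial samples with correct labels.
w2, b2 = train(np.vstack([X, X_adv]), np.r_[y, y])
```

Evaluating `accuracy` on `X_adv` before and after retraining shows the attack degrading the clean-trained model while the retrained model keeps high accuracy on both clean and adversarial inputs, mirroring the defence evaluated in the paper.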

Conference Title

2022 31st International Conference on Computer Communications and Networks (ICCCN)

Rights Statement

© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Subject

Artificial intelligence

Attacks

Computer Science

Computer Science, Hardware & Architecture

Computer Science, Information Systems

Computer Science, Theory & Methods

Citation

Jadidi, Z; Pal, S; Nayak, NK; Selvakkumar, A; Chang, C-C; Beheshti, M; Jolfaei, A, Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems, 2022 31st International Conference on Computer Communications and Networks (ICCCN), 2022