Computing Reviews

Walling up backdoors in intrusion detection systems
Bachl M., Hartl A., Fabini J., Zseby T. Big-DAMA 2019 (Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks, Orlando, FL, Dec 9, 2019), 8-13, 2019. Type: Proceedings
Date Reviewed: 02/04/21

Intrusion detection systems (IDSs) detect anomalies in a network. However, backdoors inserted into them for profit, political gain, or other reasons can be hard to detect and may introduce security vulnerabilities.

IDSs are often built using multilayer perceptrons (MLPs), and more research is needed to detect backdoors in such systems. The authors' work is based on two methods:

(1) an analysis of backdoors using visual tools and techniques; and
(2) an analysis of defense techniques like pruning and fine-tuning.

The authors argue that, while such methods have been studied for convolutional neural networks (CNNs), comparable research on detecting backdoors in traditional MLPs, decision trees (DTs), and random forests (RFs) is lacking.

For their experimental setup, the authors build backdoored RF and MLP models using PyTorch. They visualize the models using partial dependence plots (PDPs) and accumulated local effects (ALE) plots, looking for regions where the behavior of the model under investigation (MuI) is unexplainable or counterintuitive. They also study the effect of techniques like pruning and fine-tuning on their models, using separate validation and test datasets.
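The PDP visualization the review mentions is simple to sketch: fix one feature at a grid of values across the whole dataset and average the model's predictions at each value. The helper below is an illustrative, hypothetical implementation, not the authors' code (scikit-learn provides `sklearn.inspection.partial_dependence`, but the manual version shows the idea):

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """One-feature PDP: for each grid value, overwrite `feature` in every
    row of X with that value and average the model's predictions.
    Flat or implausible regions in the resulting curve can hint that the
    model ignores the feature or reacts to it in a suspicious way."""
    curve = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        curve.append(np.mean(model.predict(X_mod)))
    return np.array(curve)
```

A region of the curve that jumps sharply at an otherwise unused feature value is the kind of counterintuitive behavior the authors look for when hunting backdoor triggers.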

Based on their experiments, the authors recommend PDP and ALE plots for analyzing questionable decisions, and they show how to detect unnecessary features that hint at a backdoor. They also show that pruning and fine-tuning techniques popular with CNNs are inefficient at removing backdoors from their experimental MLP models. However, DTs and RFs can benefit from a validation set: used as a pruning mechanism, it reduces backdoor efficacy without reducing classifier detection effectiveness.
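The pruning defense discussed above rests on a common intuition: a backdoor often relies on units that clean data rarely excites, so removing the least-active units on a clean validation set can disable the trigger. The sketch below illustrates this for a one-hidden-layer ReLU MLP; it is a hypothetical, simplified version of the technique, not the authors' implementation:

```python
import numpy as np

def prune_by_activation(W1, b1, W2, X_val, frac=0.2):
    """Zero out hidden units least active on clean validation data.
    W1, b1: input-to-hidden weights/bias; W2: hidden-to-output weights.
    Units that clean inputs barely excite may exist only to serve a
    backdoor trigger, so cutting them can reduce backdoor efficacy."""
    h = np.maximum(0.0, X_val @ W1 + b1)   # ReLU hidden activations
    mean_act = h.mean(axis=0)               # average activity per unit
    k = int(frac * mean_act.size)           # number of units to prune
    dead = np.argsort(mean_act)[:k]         # least-active unit indices
    W2_pruned = W2.copy()
    W2_pruned[dead, :] = 0.0                # sever their output paths
    return W2_pruned
```

The paper's finding that this family of defenses works poorly on their MLPs (while validation-set pruning helps for DTs and RFs) suggests the fraction pruned and the choice of validation data matter a great deal in practice.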

In summary, this paper provides good commentary on backdoors and highlights the need to devise new ways to detect them.

Reviewer: Shyamkumar Iyer. Review #: CR147178 (2107-0184)

Reproduction in whole or in part without permission is prohibited.   Copyright 2024 ComputingReviews.com™