Accepted Papers

A Machine Learning Approach for Detecting and Classifying Jamming Attacks Against OFDM-based UAVs
Jered Pawlak, Yuchen Li, Joshua Price, Matthew Wright, Khair Al Shamaileh, Quamar Niyaz, and Vijay Devabhaktuni

In this paper, a machine learning (ML) approach is proposed to detect and classify jamming attacks on unmanned aerial vehicles (UAVs). Four attack types are implemented using software-defined radio (SDR); namely, barrage, single-tone, successive-pulse, and protocol-aware jamming. Each type is launched against a drone that uses orthogonal frequency division multiplexing (OFDM) communication to qualitatively analyze its impact in terms of jamming range, complexity, and severity. Then, an SDR is utilized in proximity to the drone, under systematic testing scenarios, to record the radiometric parameters before and after each attack is launched. Signal-to-noise ratio (SNR), energy threshold, and several OFDM parameters are used as features and fed to six ML algorithms to enable autonomous jamming detection and classification. The algorithms are quantitatively evaluated with metrics including detection and false-alarm rates to facilitate efficient decision-making for improved reception integrity and reliability. The resulting ML approach detects and classifies jamming with an accuracy of 92.2% and a false-alarm rate of 1.35%.
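As a rough illustration of the SNR and energy-threshold features used above, a minimal energy-detection sketch might look as follows; the signal parameters and the single-tone jammer model are entirely hypothetical stand-ins, not the paper's SDR setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(rx, noise_ref):
    """Estimated SNR in dB: received power relative to a noise-only capture."""
    p_rx = np.mean(np.abs(rx) ** 2)
    p_n = np.mean(np.abs(noise_ref) ** 2)
    return 10 * np.log10(p_rx / p_n)

def energy_detect(rx, noise_ref, margin_db=3.0):
    """Flag jamming when received energy exceeds the noise floor by margin_db."""
    return snr_db(rx, noise_ref) > margin_db

t = np.arange(2048)
noise_ref = 0.1 * rng.standard_normal(2048)                  # noise-only capture
jammed = 0.1 * rng.standard_normal(2048) + np.sin(0.3 * t)   # single-tone jammer
quiet = 0.1 * rng.standard_normal(2048)                      # no jammer present

flag_jam = energy_detect(jammed, noise_ref)     # tone raises energy well above the floor
flag_quiet = energy_detect(quiet, noise_ref)    # energy stays near the noise floor
```

In the paper's pipeline, such scalar features (together with OFDM parameters) would feed the six ML classifiers rather than a fixed threshold.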

Adversarial Attacks on Deep Learning-based Floor Classification and Indoor Localization
Mohini Patil, Xuyu Wang, Xiangyu Wang, and Shiwen Mao

With the great advances in location-based services (LBS), Wi-Fi localization has attracted great interest due to its ubiquitous availability in indoor environments. Deep neural networks (DNNs) are a powerful method for achieving high localization performance with Wi-Fi signals. However, DNN models have been shown to be vulnerable to adversarial examples generated by introducing subtle perturbations. In this paper, we propose adversarial deep learning for indoor localization systems that use the Wi-Fi received signal strength indicator (RSSI). In particular, we study the impact of adversarial attacks on floor classification and location prediction with Wi-Fi RSSI. Three white-box attack methods are examined: the fast gradient sign method (FGSM), projected gradient descent (PGD), and the momentum iterative method (MIM). We validate the performance of DNN-based floor classification and location prediction using a public dataset and show that the DNN models are highly vulnerable to all three white-box adversarial attacks.
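For readers unfamiliar with FGSM, the single-step attack can be sketched on a toy stand-in for the localization model. The logistic-regression "floor classifier", its weights, and the RSSI-like feature values below are illustrative assumptions, not the paper's DNN:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM against a logistic-regression classifier.
    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w @ x) - y) * w; the attack moves x along its sign."""
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

# Toy RSSI-derived feature vector and weights (illustrative values only).
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.1, 0.0, 0.2])   # clean sample, true label y = 1
y = 1

pred_clean = int(sigmoid(w @ x) > 0.5)      # correctly classified as 1
x_adv = fgsm(x, y, w, eps=0.1)
pred_adv = int(sigmoid(w @ x_adv) > 0.5)    # flipped to 0 by the perturbation
```

PGD iterates this step with a projection back into an epsilon-ball, and MIM adds a momentum term to the gradient; both reduce to FGSM in the single-step case.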

Adversarial Classification of the Attacks on Smart Grids Using Game Theory and Deep Learning
Kian Hamedani, Lingjia Liu, Jithin Jagannath, and Yang (Cindy) Yi

Smart grids are vulnerable to cyber-attacks. This paper proposes a game-theoretic approach to evaluate the variations caused by an attacker on the power measurements. Adversaries can gain financial benefits by manipulating the meters of smart grids. On the other hand, a defender tries to maintain the accuracy of the meters. A zero-sum game is used to model the interactions between the attacker and the defender. Two different defenders are studied, multi-layer perceptrons (MLPs) and traditional state estimators, and the effectiveness of each in different scenarios is evaluated. The utility of the defender is also investigated in adversary-aware and adversary-unaware situations. Our simulations suggest that the utility gained by the adversary drops significantly when the MLP is used as the defender, and that the defender's utility varies across scenarios depending on which defender is used. Finally, we show that this zero-sum game does not admit a pure-strategy equilibrium, and we compute its mixed-strategy equilibrium.
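As background on the final step, the mixed strategy of a 2x2 zero-sum game without a saddle point can be computed in closed form. The matching-pennies payoff matrix below is a generic stand-in, since the abstract does not give the attacker-defender payoffs:

```python
def mixed_strategy_2x2(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with no saddle
    point. A holds the payoffs to the row player (the attacker); the
    column player (the defender) minimizes the same quantity."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # P(row player plays row 0)
    q = (d - b) / denom          # P(column player plays column 0)
    value = (a * d - b * c) / denom   # expected payoff at equilibrium
    return p, q, value

# Matching pennies: no pure strategy is stable, so both players randomize.
p, q, value = mixed_strategy_2x2([[1, -1], [-1, 1]])
```

Larger games need linear programming rather than this closed form, but the principle (each player randomizes so the opponent is indifferent) is the same.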

Adversarial Learning for Cross Layer Security
Hesham Mohammed and Dola Saha

Spectrum access in the next generation wireless networks will be congested, competitive, and vulnerable to malicious intents of strong adversaries. This compels us to rethink wireless security as a cross-layer solution addressing encryption and modulation as a joint problem. We propose a novel neural-network-generated cross-layer security algorithm in which the trusted transmitter encodes a secret message with a shared secret key to generate a secured waveform. This encrypted waveform remains undeciphered by the adversary, while the intended receiver can recover the secret. Cooperative learning is introduced to enable the trusted pair to defeat the adversary and learn the encryption and modulation jointly. The model can encode any modulation order and improves both reliability and secrecy capacity compared to prior work. Our results demonstrate that the trusted pair achieves secure data transmission while the adversary cannot decipher the received cipher data.

Efficient and Privacy-preserving Distributed Learning in Cloud-Edge Computing Systems
Yili Jiang, Kuan Zhang, Yi Qian, and Rose Qingyang Hu

Machine learning and cloud computing have been integrated in diverse applications to provide intelligent services. With its powerful computational ability, the cloud server can execute machine learning algorithms efficiently. However, accurate machine learning depends on training the model with sufficient data, and transmitting massive raw data from distributed devices to the cloud leads to heavy communication overhead and privacy leakage. Distributed learning is a promising technique to reduce data transmission by allowing distributed devices to participate in model training locally, so that a global learning task can be performed in a distributed way. Although this avoids disclosing the participants' raw data to the cloud directly, the cloud can still infer partial private information by analyzing their local models. To tackle this challenge, state-of-the-art solutions mainly rely on encryption and differential privacy. In this paper, we propose to implement distributed learning in a three-layer cloud-edge computing system. By applying mini-batch gradient descent, we decompose a learning task across distributed edge nodes and participants hierarchically. To improve communication efficiency while preserving privacy, we employ a secure aggregation protocol within small groups by utilizing the social network of participants. Simulation results are presented to show the effectiveness of the proposed scheme in terms of learning accuracy and efficiency.
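The group-wise secure aggregation idea can be sketched with pairwise cancelling masks: each masked update looks random on its own, yet the group sum equals the true sum. This is a generic sketch of the masking principle, not the paper's exact protocol:

```python
import random

def masked_updates(updates, seed=0):
    """Pairwise-mask secure aggregation sketch: each pair (i, j), i < j,
    shares a random mask r; participant i adds it and j subtracts it.
    Individual masked updates reveal little, but the masks cancel when
    the aggregator sums over the whole group."""
    rng = random.Random(seed)  # stands in for pairwise-agreed PRG seeds
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.uniform(-100, 100)   # mask shared by pair (i, j)
            masked[i] += r
            masked[j] -= r
    return masked

local_grads = [0.5, -1.25, 2.0, 0.75]    # toy scalar local gradients
agg = sum(masked_updates(local_grads))   # equals sum(local_grads) up to rounding
```

Restricting this to small, socially connected groups (as the abstract proposes) keeps the number of pairwise masks, and hence the communication overhead, low.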

Explainability-based Backdoor Attacks Against Graph Neural Networks
Jing Xu, Minhui (Jason) Xue, and Stjepan Picek

Backdoor attacks represent a serious threat to neural network models. A backdoored model misclassifies trigger-embedded inputs into an attacker-chosen target label while performing normally on benign inputs. There are already numerous works on backdoor attacks against neural networks, but only a few consider graph neural networks (GNNs). In particular, there has been no intensive research on how the trigger-injecting position affects the performance of backdoor attacks on GNNs.

To bridge this gap, we conduct an experimental investigation of the performance of backdoor attacks on GNNs. We apply two powerful GNN explainability approaches to select the optimal trigger-injecting position to achieve two attacker objectives: a high attack success rate and a low clean-accuracy drop. Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness in selecting the trigger-injecting position for backdoor attacks on GNNs. For instance, on the node classification task, the backdoor attack with the trigger-injecting position selected by GraphLIME reaches over 84% attack success rate with less than 2.5% accuracy drop.

Inaudible Manipulation of Voice-Enabled Devices Through BackDoor Using Robust Adversarial Audio Attacks: Invited Paper
Morriel Kasher, Michael Zhao, Aryeh Greenberg, Devin Gulati, Silvija Kokalj-Filipovic, and Predrag Spasojevic

The BackDoor system provides a method for inaudibly transmitting messages that are recorded by unmodified receiver microphones as if they were transmitted audibly. Adversarial Audio attacks allow for an audio sample to sound like one message but be transcribed by a speech processing neural network as a different message. This study investigates the potential applications of Adversarial Audio through the BackDoor system to manipulate voice-enabled devices, or VEDs, without detection by humans or other nearby microphones. We discreetly transmit voice commands by applying robust, noise-resistant adversarial audio perturbations through BackDoor on top of a predetermined speech or music base sample to achieve a desired target transcription. Our analysis compares differing base carriers, target phrases, and perturbation strengths for maximal effectiveness through BackDoor. We determined that such an attack is feasible and that the desired adversarial properties of the audio sample are maintained even when transmitted through BackDoor.

Intermittent Jamming against Telemetry and Telecommand of Satellite Systems and A Learning-driven Detection Strategy
Selen Gecgel and Gunes Karabulut Kurt

Towards sixth-generation (6G) networks, satellite communication systems, especially those based on Low Earth Orbit (LEO) constellations, are becoming promising due to their unique and comprehensive capabilities. These advantages are accompanied by a variety of challenges, such as security vulnerabilities, management of hybrid systems, and high mobility. In this paper, firstly, a physical-layer security deficiency is addressed with a conceptual framework that considers the cyber-physical nature of satellite systems and highlights the potential attacks. Secondly, a learning-driven detection scheme is proposed, and a lightweight convolutional neural network (CNN) is designed. The performance of the designed CNN architecture is compared with that of a prevalent machine learning algorithm, the support vector machine (SVM). The results show that intermittent jamming attacks against satellite systems can be detected by employing the proposed scheme.

Learning Model for Cyber-attack Index Based Virtual Wireless Network Selection
Naveen Sapavath and Danda B. Rawat

With the availability of different wireless networks in wireless virtualization, dynamic network selection in a given heterogeneous environment is a challenging task when wireless users have cybersecurity and data privacy requirements. Selecting a network with low cyber risk can result in a good service experience for users. Network selection in a virtualized wireless environment is determined by various factors such as Quality of Experience (QoE), data loss prevention, security, and privacy. In this paper, we propose a learning model for dynamic network selection based on the cyber-attack index (CI) value of networks. We develop a recommendation system that guides users to select the most secure network, i.e., the one with the lowest CI value. A mathematical model based on least squares and convex optimization is presented that predicts the CI of a network with the goal of maximizing the number of wireless users/subscribers. Numerical results show that the CI-based recommendation system outperforms traditional prediction-based systems. Furthermore, we compare our approach with existing approaches and find that it performs better in terms of maximizing the number of wireless users/subscribers and providing better service to them.
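A least-squares CI predictor of the kind described can be sketched as follows; the feature names and every number below are hypothetical, chosen only to illustrate the fit-then-recommend flow:

```python
import numpy as np

# Toy history: rows are per-network feature vectors (e.g., past alert rate,
# patch latency, open-port count); y is the observed cyber-attack index.
X = np.array([[0.2, 1.0, 3.0],
              [0.9, 4.0, 7.0],
              [0.1, 0.6, 2.0],
              [0.5, 3.0, 6.0]])
y = np.array([1.3, 4.3, 0.8, 3.2])

# Least-squares fit: w = argmin ||Xw - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the CI of candidate networks and recommend the lowest-risk one.
candidates = np.array([[0.3, 1.2, 3.5],
                       [0.7, 3.5, 6.5]])
ci_pred = candidates @ w
recommended = int(np.argmin(ci_pred))   # index of the least-CI network
```

The paper's model additionally brings in convex optimization to balance security against subscriber numbers; this sketch shows only the CI-prediction half.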

Low-cost Influence-Limiting Defense against Adversarial Machine Learning Attacks in Cooperative Spectrum Sensing
Zhengping Luo, Shangqing Zhao, Rui Duan, Zhuo Lu, Yalin E. Sagduyu, and Jie Xu

Cooperative spectrum sensing improves the reliability of spectrum sensing by individual sensors for better utilization of the scarce spectrum bands, making it feasible for secondary spectrum users to transmit their signals when primary users remain idle. However, cooperative spectrum sensing suffers from various vulnerabilities, especially when machine learning techniques are applied. The influence-limiting defense has been proposed to protect the data fusion center when a small number of spectrum sensing devices is controlled by an intelligent attacker that sends erroneous sensing results. Nonetheless, this defense suffers from high computational complexity. In this paper, we propose a low-cost version of the influence-limiting defense and demonstrate that it decreases the computation cost significantly (the time cost is reduced to less than 20% of the original defense) while maintaining the same level of defense performance.
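The influence-limiting idea, down-weighting sensors whose reports have historically disagreed with the fused decision, can be illustrated with a toy reputation-weighted fusion rule. This is a minimal sketch of the principle, not the paper's algorithm or its low-cost variant:

```python
def fuse_reports(history, reports):
    """Reputation-weighted fusion sketch: a sensor's weight decays with the
    number of times its past reports disagreed with the fused decision,
    limiting the influence of compromised sensors on the fusion center."""
    # history[i] = list of (past_report, past_fused_decision) for sensor i
    weights = [1.0 / (1.0 + sum(1 for r, d in h if r != d)) for h in history]
    score = sum(w * r for w, r in zip(weights, reports))
    return 1 if score >= sum(weights) / 2 else 0

# Two honest sensors (clean history) and two captured ones (three past
# disagreements each); report 1 = primary user busy, 0 = idle.
history = [[(1, 1), (0, 0), (1, 1)],
           [(0, 0), (1, 1), (0, 0)],
           [(0, 1), (1, 0), (0, 1)],
           [(1, 0), (0, 1), (1, 0)]]
decision = fuse_reports(history, reports=[1, 1, 0, 0])
```

Here an unweighted majority vote would deadlock 2-2, while the reputation weights let the honest sensors carry the decision.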

Machine Learning-Assisted Wireless PHY Key Generation with Reconfigurable Intelligent Surfaces
Long Jiao, Guohua Sun, Junqing Le, and Kai Zeng

The key generation rate (KGR) performance of wireless physical layer (PHY) key generation can be limited by quasi-static slow fading environments. In this work, we aim to exploit the radio environment reconfiguration ability enabled by reconfigurable intelligent surfaces (RIS) to improve the KGR of PHY key generation. By rapidly changing the RIS configurations, the randomness, or entropy rate, of the wireless channel can be significantly increased, thus improving the KGR. To achieve a high KGR while keeping the bit disagreement ratio (BDR) low, for the first time, we propose a machine learning (ML) based adaptive quantization scheme that predicts an optimal quantization level from channel state information (CSI). Simulation results show that, with a prediction accuracy as high as 98.2%, the proposed ML-based prediction model tends to assign high quantization levels in the high-SNR regime, while adopting low quantization levels at low SNRs to maintain a low BDR.
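A fixed-level uniform quantizer, the building block for which the ML model selects a level, can be sketched as follows; the channel gains below are toy values, not simulated CSI:

```python
def quantize_channel(samples, level):
    """Uniform quantizer sketch: map each channel sample to `level` bits
    (2**level regions between the observed min and max). Higher levels
    extract more key bits per sample but raise the BDR at low SNR."""
    lo, hi = min(samples), max(samples)
    bins = 2 ** level
    bits = ""
    for s in samples:
        idx = min(int((s - lo) / (hi - lo) * bins), bins - 1)
        bits += format(idx, f"0{level}b")   # zero-padded binary index
    return bits

alice = [0.12, 0.85, 0.40, 0.63]   # toy reciprocal channel gains at Alice
bob   = [0.13, 0.84, 0.41, 0.62]   # Bob's slightly noisy observations
key_a = quantize_channel(alice, level=2)
key_b = quantize_channel(bob, level=2)
bdr = sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
```

With noisier observations (lower SNR), the same samples would land in different bins at level 2 but agree at level 1, which is exactly the trade-off the predicted quantization level navigates.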

Multi-Agent Reinforcement Learning Approaches to RF Fingerprint Enhancement
Joseph Carmack, Steve Schmidt, and Scott Kuzdeba

Deep learning-based RF fingerprinting has shown great promise for IoT device security. This work explores various multi-agent reinforcement learning approaches to enable RF fingerprint enhancement for an ensemble of transmitters. A RiftNet™ Reconstruction Model (RRM) is used to learn a latent Wi-Fi signal representation and how to reconstruct from that latent representation at the transmitter, such that the reconstruction uniquely excites parts of the front-end to enhance the fingerprint. Deep reinforcement learning is then employed to learn the RRM control policy. Details on the design of the control interface, state representation, and reward structure are presented for four different policy approaches. The resulting computational and security characteristics are discussed.

Poisoning Attack Anticipation in Mobile Crowdsensing: A Competitive Learning-Based Study
Alexandre Prud'Homme and Burak Kantarci

Mobile Crowdsensing is prone to adversarial attacks, particularly data injection attacks that mislead the servers in the decision-making process. This paper tackles threat anticipation from the standpoint of data poisoning attacks and models the behaviour of adversaries in a Mobile Crowdsensing setting under various classifiers. To this end, we study and quantify the impact of competitive learning-based data poisoning in a Mobile Crowdsensing environment by considering a black-box attack through a self-organizing map. Under various machine learning classifiers in the decision-making platforms, we show that the accuracy of the crowdsensing platform's decisions decreases by 18%-22% when an adversary pursues a competitive learning-based data poisoning attack on the platform. Furthermore, we show the robustness of certain classifiers under an increasing number of poisoned samples.

RiftNeXt™: Explainable Deep Neural RF Scene Classification
Steve Schmidt, James Stankowicz, Joseph Carmack, and Scott Kuzdeba

We propose a framework, RiftNeXt™, to perform radio frequency (RF) scene context change detection and classification with expert-driven neural explainability. Our approach uses a deep-learning-based classifier to perform spectrum monitoring of Wi-Fi devices and usage patterns, with an auxiliary classifier operating post hoc to output human-interpretable reasoning for classification declarations. The classification network operates on input spectrograms through a series of dilated causal convolution layers for feature extraction, which feed into classification layers. We have previously shown that dilated causal convolutions are well suited to RF applications, including RF fingerprinting, and extend their use here to new applications. The Explainability Module operates over an auxiliary dataset, built from domain expertise, to learn how to reason over the classification network's outputs. These two components, the deep learning classifier and the Explainability Module, are combined into a unique explainable deep learning approach that we apply to Wi-Fi spectrum monitoring. This paper provides results from this fused approach, leveraging the power of deep learning classification with user-interpretable explainability.

SWIPEGAN: Swiping Data Augmentation Using Generative Adversarial Networks for Smartphone User Authentication
Attaullah Buriro, Francesco Ricci, and Bruno Crispo

Behavioral biometric-based smartphone user authentication schemes based on touch/swipe gestures have been shown to provide the desired usability. However, their accuracy is not yet considered up to the mark, primarily due to the lack of a sufficient number of training samples: users are reluctant to provide many swiping gestures. Consequently, the application of such authentication techniques in the real world is still limited.

To overcome the shortage of training samples and make behavioral biometric-based schemes more accurate, we propose the use of Generative Adversarial Networks (GANs) to generate synthetic samples, in our case swiping gestures. GANs are an unsupervised approach to synthetic data generation and have already been used in a wide range of applications, such as image and video generation; however, their use in behavioral biometric-based user authentication schemes has not been explored yet. In this paper, we propose SWIPEGAN to generate swiping samples for smartphone user authentication. Extensive experimentation and evaluation show the quality of the generated synthetic swiping samples and their efficacy in increasing the accuracy of the authentication scheme.

Variational Leakage: The Role of Information Complexity in Privacy Leakage
Amir Ahooye Atashin, Behrooz Razeghi, Deniz Gündüz, and Slava Voloshynovskiy

We study the role of information complexity in privacy leakage about an attribute of an adversary's interest, which is not known a priori to the system designer. Considering the supervised representation learning setup and using neural networks to parameterize the variational bounds of information quantities, we study the impact of the following factors on the amount of information leakage: the information complexity regularizer weight, the latent space dimension, the cardinalities of the known utility and unknown sensitive attribute sets, the correlation between utility and sensitive attributes, and a potential bias in the sensitive attribute of the adversary's interest. We conduct extensive experiments on the Colored-MNIST and CelebA datasets to evaluate the effect of information complexity on the amount of intrinsic leakage.