AI/ML with Signal Processing
The global expansion of wireless technologies has created new challenges for non-cooperative RF-based systems designed for signal detection, exploitation, and geolocation. With the prevalence of user-selectable modes of operation, including customizable frequency channels, frequency bands, and data rates, detecting and distinguishing the multitude of signals has become increasingly complex and resource intensive. To make matters worse, competing manufacturers reuse the same underlying technologies and market them as their own, creating further ambiguity in the detection and identification of specific targets. Our goal was to explore the intersection of AI/ML with signal processing to determine whether neural networks can offer improvements in processing efficiency, accuracy, and scalability, particularly when compared to traditional signal processing techniques.
We evaluated six FSK signals with baud rates ranging from 1.2k to 9.6k. Two of these signals shared a common baud rate but differed in modulation index. Signals with identical baud rates can be difficult to differentiate even with efficient signal processing techniques such as cyclostationary analysis, which exploits the periodic statistical properties of the underlying data. The selected signal types serve as representative examples of the configurations a commercial device may offer for user customization.
Training data was synthesized with the signal-to-noise ratio (SNR) varied randomly from 3.0 to 30.0 dB to simulate real-world collection scenarios. Small frequency offsets were added to account for Doppler shifts and local oscillator imperfections.
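The synthesis procedure can be sketched as follows. This is a minimal illustration, not the actual generator used in the study: the sample rate, maximum offset, and helper name `synthesize_fsk` are all hypothetical, and only binary FSK is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_fsk(baud_rate, mod_index, fs=48_000.0, n_samples=8192,
                   snr_db_range=(3.0, 30.0), max_offset_hz=500.0):
    """Generate one complex-baseband 2-FSK example with a random SNR
    and a small random frequency offset (parameter values are illustrative)."""
    sps = int(fs / baud_rate)                      # samples per symbol
    n_syms = n_samples // sps + 1
    bits = rng.integers(0, 2, n_syms)
    # Frequency deviation from the modulation index: h = 2 * fd / baud
    fd = mod_index * baud_rate / 2.0
    inst_freq = np.repeat(np.where(bits == 1, fd, -fd), sps)[:n_samples]
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    sig = np.exp(1j * phase)
    # Random frequency offset models Doppler shift / LO imperfection
    f_off = rng.uniform(-max_offset_hz, max_offset_hz)
    t = np.arange(n_samples) / fs
    sig *= np.exp(2j * np.pi * f_off * t)
    # Complex AWGN at a random SNR drawn from the 3.0-30.0 dB range
    snr_db = rng.uniform(*snr_db_range)
    noise_pow = 1.0 / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(n_samples)
                                      + 1j * rng.standard_normal(n_samples))
    return sig + noise

x = synthesize_fsk(baud_rate=1200, mod_index=1.0)
```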
The data was preprocessed by computing the power spectral density (PSD) with an 8192-point FFT before feeding it into the machine learning models. This transformation allowed a traditional signal detection problem to be treated as an image recognition problem, a task at which machine learning algorithms are extremely proficient. A total of 3000 examples were synthesized, with 60% used for training, 20% for validation, and the remaining 20% for testing.
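A simple version of this preprocessing step might look like the sketch below. The log scaling, normalization, and the exact split mechanics are assumptions for illustration rather than the study's actual pipeline.

```python
import numpy as np

def psd_feature(iq, nfft=8192):
    """PSD feature vector: centered, log-magnitude, normalized to [0, 1]
    so each example resembles a 1-D 'image' for the classifier."""
    spec = np.fft.fftshift(np.fft.fft(iq, nfft))
    psd = 10 * np.log10(np.abs(spec) ** 2 + 1e-12)
    psd -= psd.min()
    return psd / (psd.max() + 1e-12)

example = np.exp(2j * np.pi * 0.1 * np.arange(8192))   # a test tone
feat = psd_feature(example)

# 60/20/20 split of the 3000 synthesized examples
n = 3000
idx = np.random.default_rng(1).permutation(n)
train, val, test = idx[:1800], idx[1800:2400], idx[2400:]
```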
Three machine learning architectures were evaluated: Softmax Regression (multinomial logistic regression), a Deep Neural Network, and a Convolutional Neural Network (CNN). The Softmax model was optimized using the Stochastic Gradient Descent (SGD) algorithm, while the Neural Network and CNN were optimized using the Adam algorithm. Softmax Regression is a generalization of logistic regression to multiple classes and is commonly used for classification problems. A Neural Network is more complex and can learn more features in the data through nonlinear transformations. As such, it was instructive to see whether the Neural Network could provide additional detection performance over the Softmax Regression model. The Softmax Regression was implemented using the scikit-learn library, and the Neural Network and CNN were implemented using PyTorch.
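To make the simplest of the three models concrete, here is a from-scratch softmax-regression classifier trained with plain SGD. It is a stand-in for the scikit-learn implementation, shown on a hypothetical two-class toy problem; all hyperparameter values are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_sgd(X, y, n_classes, lr=0.1, epochs=20, batch=32, seed=0):
    """Minimal multinomial logistic regression trained with mini-batch SGD."""
    rng = np.random.default_rng(seed)
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for i in range(0, len(X), batch):
            sl = order[i:i + batch]
            p = softmax(X[sl] @ W + b)
            p[np.arange(len(sl)), y[sl]] -= 1.0   # grad of cross-entropy wrt logits
            W -= lr * X[sl].T @ p / len(sl)
            b -= lr * p.mean(axis=0)
    return W, b

# Toy usage: two well-separated Gaussian clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.5, (100, 8)), rng.normal(2, 0.5, (100, 8))])
y = np.repeat([0, 1], 100)
W, b = train_softmax_sgd(X, y, n_classes=2)
acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```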
The Softmax and Neural Network models showed remarkable accuracy classifying the test data set of 600 examples. The Softmax model achieved a correct classification rate of 93.5%, while the Neural Network classified all six signal types with perfect accuracy. The wide range of SNR levels in the data, from 3.0 dB to 30.0 dB, makes this performance all the more impressive. Both models learned the underlying patterns and generalized effectively to new data examples. However, all of the examples fed into the models were centered near the same frequency offset. The following section explores how the models perform when the signals appear at random frequency offsets.
Generalization With Frequency Offsets
Real-world devices such as WiFi, 4G/5G cellular, Bluetooth, Zigbee, push-to-talk radios, and many satellite communications systems do not use fixed frequencies. The ML models in the previous example performed incredibly well at recognizing different signal types when the signals were operating at the same frequency. However, the performance of the models dropped significantly (below 30% accuracy) when presented with new data consisting of the same signals but with arbitrary frequency offsets.
It is not surprising that the Softmax and Neural Network models did not generalize to the new input data. Neither model was trained on data with frequency offsets (i.e., non-centered data). Neural networks are very good at detecting patterns they have been trained on. Even though the fundamental information in the signal has not changed, the Neural Network did not learn how to identify those variations.
The accuracy of signal detection was improved to 62% by training the Neural Network with data featuring random frequency offsets. However, it was still far from the 100% accuracy achieved in the previous example. The ability of a model to perform well on data at different positions or offsets is known as translation invariance. It turns out that fully connected Neural Networks are not well suited for this task. The next example will explore the use of Convolutional Neural Networks (CNNs). CNNs are powerful deep learning models used for image recognition and can be extended for signal detection.
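The distinction can be demonstrated with a small numpy sketch: a fully connected layer produces an entirely different output when its input shifts, while a convolution produces the same output pattern merely shifted (equivariance, which pooling then turns into approximate invariance). The vector sizes and random weights here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.zeros(64)
x[10:14] = 1.0                  # a "signal" at one position
x_shift = np.roll(x, 20)        # the same signal at a different offset

# Fully connected layer: output changes completely when the input shifts
W = rng.standard_normal((8, 64))
fc, fc_shift = W @ x, W @ x_shift

# Convolution: output is the same pattern, just shifted along with the input
k = rng.standard_normal(5)
conv = np.convolve(x, k, mode="full")
conv_shift = np.convolve(x_shift, k, mode="full")
```

Because the convolutional features only translate with the signal, a pooling stage downstream can respond to the signal regardless of where it falls in the band, which is exactly the property a frequency-offset-tolerant detector needs.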
Convolutional Neural Networks
A CNN trained on centered data was able to achieve 92% accuracy detecting test examples with arbitrary frequency offsets. This is a significant improvement over the Neural Network and Softmax models.
While 92% accuracy is very good, there is room for improvement. There are several avenues we can explore to improve the model, such as increasing the amount of training data, adding layers to increase model capacity, tuning hyperparameters, or incorporating new feature vectors.
Before diving into these optimization efforts, it is instructive to understand the reasons for the model's current limitations. To do this, we first examine a few test samples that the model predicted incorrectly. Inspecting their Power Spectral Density (PSD) plots reveals shared characteristics among the missed predictions: a significant number occur when the signal is at the band edge, resulting in partial or significant loss of the signal energy due to filtering.
Another way to build intuition into the model's performance is to plot a histogram of predictions on the test data as a function of frequency offset. We can quickly see in the plots below that most of the incorrect predictions, labeled as "Missed," fall at the passband edges. This makes sense: the CNN model was only trained on examples that contained all of the signal energy, so it never had a chance to learn the signal's appearance when part of that energy is filtered out.
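This diagnostic is straightforward to compute: bin the test examples by frequency offset and count hits and misses per bin. The sketch below uses simulated prediction outcomes (a model that is assumed to fail only near the band edges) since the actual model outputs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical per-example records: frequency offset (normalized to +/-0.5
# of the passband) and whether the model's prediction was correct.
offsets = rng.uniform(-0.5, 0.5, 600)
# Simulate a model that fails once the signal is >80% of the way to an edge
correct = np.abs(offsets) < 0.4

bins = np.linspace(-0.5, 0.5, 11)
hit, _ = np.histogram(offsets[correct], bins)
miss, _ = np.histogram(offsets[~correct], bins)   # the "Missed" histogram
per_bin_acc = hit / np.maximum(hit + miss, 1)     # accuracy per offset bin
```

Plotting `hit` and `miss` side by side over the bin centers reproduces the kind of figure described above, with the misses concentrated in the outermost bins.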
To further improve the signal detection capability of the CNN, the model can be trained with data examples at the band edge. Doing so results in a model that achieves 99.67% accuracy on the test data set consisting of 600 examples with random SNRs and frequency offsets! Developing intuition on the ML model performance can both save time and result in the best performing model.
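One simple way to manufacture such band-edge training examples is to discard part of an example's spectrum, mimicking the filtering loss a signal suffers at the passband edge. This is a sketch of the augmentation idea only; the function name and the fraction removed are illustrative, not the study's actual procedure.

```python
import numpy as np

def band_edge_augment(iq, lost_fraction):
    """Zero out a fraction of the spectrum at one band edge, mimicking a
    signal whose energy is partially filtered off at the passband edge."""
    spec = np.fft.fftshift(np.fft.fft(iq))
    n_cut = int(len(spec) * lost_fraction)
    if n_cut:
        spec[-n_cut:] = 0.0               # clip the upper band edge
    return np.fft.ifft(np.fft.ifftshift(spec))

rng = np.random.default_rng(5)
x = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
y = band_edge_augment(x, 0.25)            # drop the top 25% of the band
```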
Neural networks are powerful tools that can be applied to signal detection and identification problems with high fidelity. On top of their impressive accuracy, the processing time associated with prediction (or inference) is equally impressive. Generating predictions for the 600 test examples in the previous examples took less than 2.0 seconds total on an Intel Xeon Silver 4314 CPU @ 2.4GHz with 64GB of memory. This level of accuracy and speed represents a major step forward compared to traditional signal processing methods. Neural networks are also incredibly versatile. With minimal effort, the model was trained to differentiate between various modulation types such as FSK, BPSK, QPSK, and 8-PSK. This is just scratching the surface of how ML models can improve the performance of RF-based signal processing systems.