Better Wireless Error Models Using Neural Networks
Note
This is a Proposed Research Topic. Proposed research topics are ideas that we find both very promising as research topics and practically very useful. We have already spent some time trying out the idea and have proven (at least to ourselves) that it is feasible and that the approach outlined here can be made to work, but we don't have the resources (mostly time) to elaborate it in-house. If you are a researcher (e.g. a PhD student) looking for an exciting and rewarding topic to work on, we'd love to hear from you!
Abstract
In wireless network simulations, an error model describes how the signal-to-noise ratio (SNR) affects the number of errors at the receiver. Given that the SNR is a multi-dimensional function over time and frequency, and given the diversity of coding and modulation schemes used in wireless networks, writing a good and efficient error model is a very difficult problem. Existing error models are often closed formulas derived from empirical observations; besides being very limited in scope, they sometimes produce poor results even within their stated limitations. In contrast, we propose using neural networks and deep learning techniques to produce an error model that, in addition to being universally applicable, can produce reasonable answers even for cases it was not explicitly trained for.
Background
The recent INET 4.2 release introduces a new and versatile analog-domain representation of radio signals. The new representation can handle all kinds of radio signals, such as OFDM, FHSS, UWB, chirp, and so on, and it allows wireless technologies to be mixed arbitrarily. The API is flexible in terms of representation composition, and it also allows arbitrary extensions to be combined with the existing representations.
The next step is to come up with better wireless error models, because the current ones do not take advantage of the capabilities of the new representation. Error models are responsible for determining whether a packet has been received correctly, based on the complete description of the physical signal and interference characteristics.
All of the current wireless error models rely solely on the minimum (or average) SNIR to determine the packet error rate. In both cases, the SNIR is calculated over the whole received signal, both in time and frequency. Using the minimum SNIR as input often yields completely incorrect results: for example, a short but strong interference spike determines the minimum SNIR regardless of its duration, even though it may corrupt only a few symbols. Moreover, the current WiFi error models are not even up to date with the latest versions of the IEEE 802.11 standard, so the highest data rates are not supported.
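As a small illustration of why a single scalar is a poor input (a self-contained sketch with made-up numbers, not code from INET):

```python
import numpy as np

# Hypothetical reception: per-symbol SNIR of a 200-symbol packet, in dB.
# The channel is good (about 20 dB) except for a 3-symbol interference spike.
rng = np.random.default_rng(42)
snir_db = rng.normal(loc=20.0, scale=1.0, size=200)
snir_db[100:103] = -5.0  # short but strong interfering spike

snir = 10.0 ** (snir_db / 10.0)
print("minimum SNIR: %5.1f dB" % (10 * np.log10(snir.min())))   # dominated by the spike
print("average SNIR: %5.1f dB" % (10 * np.log10(snir.mean())))  # barely notices the spike
```

A scalar error model sees only one of these two numbers: the minimum suggests the whole packet was received at about -5 dB, while the average suggests nothing happened at all; neither reflects that only 3 of the 200 symbols were actually hit.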
The new signal representation allows error models to look at the SNIR of each physical symbol on every subcarrier, for each modulation. Doing so would not only provide more accurate error models for a particular wireless technology, but would also allow more accurate crosstalk and coexistence simulations.
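The kind of input this enables could look roughly like the following sketch; the grid dimensions and the averaging over symbol/subcarrier cells are illustrative assumptions and do not reflect the actual INET API:

```python
import numpy as np

# Assumed representation: signal, noise and interference power given on a fine
# time x frequency grid that can be averaged over (symbol, subcarrier) cells.
NUM_SYMBOLS, NUM_SUBCARRIERS = 100, 52      # e.g. an 802.11a/g-like OFDM frame
T_PER_SYMBOL, F_PER_SUBCARRIER = 4, 2       # grid samples per cell

def per_cell_average(power):
    """Average a fine time x frequency power grid over symbol x subcarrier cells."""
    t, f = power.shape
    return power.reshape(t // T_PER_SYMBOL, T_PER_SYMBOL,
                         f // F_PER_SUBCARRIER, F_PER_SUBCARRIER).mean(axis=(1, 3))

rng = np.random.default_rng(0)
grid = (NUM_SYMBOLS * T_PER_SYMBOL, NUM_SUBCARRIERS * F_PER_SUBCARRIER)
signal = rng.uniform(0.8, 1.2, size=grid)      # received signal power (linear)
noise = np.full(grid, 0.01)                    # thermal noise floor
interference = np.zeros(grid)
interference[200:220, :40] = 0.5               # an interferer overlapping part of the frame

snir = per_cell_average(signal) / per_cell_average(noise + interference)

# One SNIR value per symbol and subcarrier: this matrix, rather than a single
# scalar, is what the error model would consume.
print(snir.shape)    # (100, 52)
```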
Proposal
INET could have wireless error models that use neural networks and deep learning to determine the packet error rate, quite efficiently, from hundreds or even thousands of signal parameters for every reception. For example, the error model could calculate the packet error rate from the minimum/average SNIR of each and every symbol on every subcarrier of the received signal. The deep learning method would be used as a curve-fitting technique for the packet error rate function, capturing the essence of the modulation, scrambling, interleaving, forward error correction, etc. of the particular wireless technology.

The training data for the neural network could be generated by simulating the reception of many signals at the symbol/bit level, that is, by actually determining how every symbol is affected and then decoding the signal. Of course, this is a very time-consuming process, but it can be done in parallel (e.g. using cloud services) and it only has to be done once for a particular wireless technology. The data could also be generated by MATLAB or other third-party tools, but luckily INET already contains many algorithms for symbol/bit-level simulation. The generated data would contain the thousands of signal parameters and the reception outcome.

This data could then be fed into another cloud service, which trains a neural network using deep learning and produces the network parameters to be used in the actual wireless simulations. We expect runtime performance to be good, because evaluating a trained neural network is several orders of magnitude faster than training it.
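The offline training step could look roughly like the sketch below. It uses scikit-learn's MLPClassifier as a stand-in for whichever deep learning framework would actually be chosen, and the synthetic_reception() helper is a made-up substitute for a real symbol/bit-level simulation (the 8 dB symbol-loss threshold and the 2% correctable-loss budget are arbitrary toy numbers):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

NUM_PACKETS, NUM_SYMBOLS, NUM_SUBCARRIERS = 5000, 50, 16

def synthetic_reception(rng):
    """Stand-in for one symbol/bit-level packet reception.

    Returns the per-symbol, per-subcarrier SNIR (in dB) and whether the packet
    was received correctly. Real training data would come from INET's bit-level
    simulation (or MATLAB), decoding every packet symbol by symbol.
    """
    snir_db = rng.normal(15.0, 5.0) + rng.normal(0.0, 2.0, (NUM_SYMBOLS, NUM_SUBCARRIERS))
    if rng.random() < 0.3:                       # occasional short interference burst
        start = rng.integers(0, NUM_SYMBOLS - 5)
        snir_db[start:start + 5] -= rng.uniform(10.0, 25.0)
    lost = (snir_db < 8.0).mean()                # toy rule: symbols below ~8 dB are lost
    return snir_db.ravel(), lost < 0.02          # toy rule: code corrects up to 2% losses

rng = np.random.default_rng(1)
X, y = map(np.array, zip(*(synthetic_reception(rng) for _ in range(NUM_PACKETS))))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A plain curve fit from 800 per-symbol SNIR values to the reception outcome;
# predict_proba() would then serve as the packet error rate during simulation.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

For use at simulation time, the trained weights would then have to be exported and evaluated from the C++ error model; the sketch covers only the offline training half of the pipeline.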
Progress
Early experiments show that this is a very promising direction. After some training, our neural network already performed better than some well-known error models in INET. It is also clear from these early experiments that a researcher with more experience in neural networks than ourselves could achieve much more than we did, and that the end result would be practical to use in INET simulations.
Our code can be found in the topic/neuralnetworkerrormodel branch of the INET repository.