Deep Learning at the Physical Layer: System Challenges and Applications to 5G and Beyond

Francesco Restuccia, Member, IEEE, and Tommaso Melodia, Fellow, IEEE
Abstract—The unprecedented requirements of the Internet of Things (IoT) have made fine-grained optimization of spectrum resources an urgent necessity. Thus, designing techniques able to extract knowledge from the spectrum in real time and select the optimal spectrum access strategy accordingly has become more important than ever. Moreover, 5G and beyond (5GB) networks will require complex management schemes to deal with problems such as adaptive beam management and rate selection. Although deep learning (DL) has been successful in modeling complex phenomena, commercially available wireless devices are still very far from actually adopting learning-based techniques to optimize their spectrum usage. In this paper, we first discuss the need for real-time DL at the physical layer, and then summarize the current state of the art and existing limitations. We conclude the paper by discussing an agenda of research challenges and how DL can be applied to address crucial problems in 5GB networks.

I. INTRODUCTION

The wireless spectrum is undeniably one of nature's most complex phenomena. This is especially true in the highly dynamic context of the Internet of Things (IoT), where the widespread presence of tiny embedded wireless devices seamlessly connected to people and objects will make spectrum-related quantities such as fading, noise, interference, and traffic patterns hardly predictable with traditional mathematical models. Techniques able to perform real-time, fine-grained spectrum optimization will thus become fundamental to squeeze out any spectrum resource available to wireless devices.

Fig. 1: Key issues in today's wireless optimization approaches.

There are a number of key issues – summarized in Fig. 1 – that make existing wireless optimization approaches not completely suitable to address the spectrum challenges mentioned above. On the one hand, model-driven approaches aim to (i) mathematically formalize the network in its entirety, and (ii) optimize an objective function. Although they yield optimal solutions, these approaches are usually NP-hard, and thus cannot run in real time. Moreover, they rely on a series of modeling assumptions (e.g., fading/noise distribution, traffic and mobility patterns, and so on) that may not always hold. On the other hand, protocol-driven approaches consider a specific wireless technology (e.g., WiFi, Bluetooth, or Zigbee) and attempt to heuristically change parameters such as the modulation scheme, coding level, and packet size based on metrics computed in real time from pilots and/or training symbols. Protocol-driven approaches, being heuristic in nature, necessarily yield sub-optimal performance.

To obtain the best of both worlds, a new approach called spectrum-driven is being explored. In short, by using real-time machine learning (ML) techniques implemented in the hardware portion of the wireless platform, we can design wireless systems that learn by themselves the optimal spectrum actions to take given the current spectrum state. Concretely speaking, the big picture is to realize systems able to distinguish on their own among different spectrum states (e.g., based on noise, interference, channel occupation, and the like), and to change their hardware and software fabric in real time to implement the optimal spectrum action [1, 2]. However, despite numerous recent advances, truly self-adaptive and self-resilient cognitive radios have so far remained elusive. On the other hand, the success of deep learning at the physical layer (PHY-DL) in addressing problems such as modulation recognition [3], radio fingerprinting [4], and Medium Access Control (MAC) [5] has taken us many steps in the right direction [6]. Thanks to its unique advantages, deep learning (DL) can really be a game-changer, especially when cast in the context of a real-time hardware-based implementation.

Existing work has mostly focused on generating spectrum data and training models in the cloud. However, a number of key system-level issues still remain substantially unexplored. To this end, we notice that the most relevant survey work [7, 8] introduces research challenges from a learning perspective only. Moreover, [9] and similar surveys focus on the application of DL to the upper layers of the network stack. Since DL was not conceived with the constraints and requirements of wireless communications in mind, it is still unclear what the fundamental limitations of PHY-DL are. Moreover, existing work has not yet explored how PHY-DL can be used to address problems in the context of 5G and beyond (5GB) networks. For this reason, the first key contribution of this paper is to discuss the research challenges of real-time PHY-DL without considering any particular frequency band or radio technology (Sections III and IV). The second key contribution is the introduction of a series of practical problems that may be addressed by using PHY-DL techniques in the context of 5GB (Section IV-C). Notice that 5GB networks are expected to be heavily based on millimeter-wave (mmWave) and ultra-wideband communications, hence our focus on these issues. Since an exhaustive compendium of the existing work in PHY-DL is outside the scope of this manuscript, we refer the reader to [9] for an excellent survey.

F. Restuccia and T. Melodia are with the Institute for the Wireless Internet of Things, Northeastern University, Boston, MA, USA. Author emails: {frestuc, melodia}@northeastern.edu. This paper has been accepted for publication in IEEE Communications Magazine.

II. WHY DEEP LEARNING AT THE PHYSICAL LAYER?

DL is exceptionally suited to address problems where closed-form mathematical expressions are difficult to obtain [10]. For this reason, convolutional neural networks (CNNs) are now being "borrowed" by wireless researchers to address handover and power management in cellular networks, dynamic spectrum access, resource allocation/slicing/caching, video streaming, and rate adaptation, just to name a few. Fig. 2 summarizes why traditional ML may not effectively address real-time physical-layer problems. Overall, DL relieves the designer from the burden of finding the right "features" characterizing a given wireless phenomenon. At the physical layer, this key advantage becomes almost a necessity, for at least three reasons, which are discussed below.

Fig. 2: Feature-based approaches vs. convolutional neural networks: a single CNN concurrently classifies the modulation ("QPSK"), the number of carriers ("64"), and the device fingerprint ("#8"), while a feature-based approach requires a separate feature-extraction stage and neural network for each output.

Highly-Dimensional Feature Spaces. Classifying waveforms ultimately boils down to distinguishing small-scale patterns in the in-phase/quadrature (I/Q) plane, which may not be clearly separable in a low-dimensional feature space. For example, in radio fingerprinting we want to distinguish among hundreds (potentially thousands) of devices based on the unique imperfections imposed by their hardware circuitry. While legacy low-dimensional techniques can correctly distinguish up to a few dozen devices [11], DL-based classifiers can scale up to hundreds of devices by learning extremely complex features in the I/Q space [4]. Similarly, O'Shea et al. [3] have demonstrated that, on the 24-modulation dataset considered, DL models achieve on average about 20% higher classification accuracy than legacy learning models under noisy channel conditions.

General-Purpose Learning. CNNs learn I/Q patterns directly in the I/Q plane, making them amenable to addressing very different classification problems, ranging from modulation recognition [3] to radio fingerprinting [4]. CNNs also keep latency and energy consumption constant, as explained in Section III. Fig. 2 shows an example where a learning system is trained to classify the modulation, the number of carriers, and the device fingerprint. While DL can recognize the three parameters concurrently, traditional learning requires a different feature extraction process for each classification output. This, in turn, increases hardware resource consumption and hinders fine-tuning of the learning model.
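To make the single-trunk, multi-head design sketched in Fig. 2 concrete, a minimal PyTorch sketch follows. The class name, layer sizes, and class counts (24 modulations, 8 carrier configurations, 100 device fingerprints) are illustrative assumptions, not the architectures used in [3, 4]:

```python
import torch
import torch.nn as nn

class MultiHeadPhyCNN(nn.Module):
    """One convolutional trunk over raw I/Q samples, three task heads.

    Input shape: (batch, 2, N) -- I and Q as two channels of N samples.
    """
    def __init__(self, n_mods=24, n_carrier_cfgs=8, n_devices=100):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, 64, 1)
        )
        # One linear head per classification output (cf. Fig. 2).
        self.mod_head = nn.Linear(64, n_mods)              # e.g., "QPSK"
        self.carrier_head = nn.Linear(64, n_carrier_cfgs)  # e.g., "64"
        self.fingerprint_head = nn.Linear(64, n_devices)   # e.g., "#8"

    def forward(self, iq):
        z = self.trunk(iq).squeeze(-1)  # shared features for all three tasks
        return (self.mod_head(z), self.carrier_head(z),
                self.fingerprint_head(z))

# A batch of four 512-sample I/Q snippets yields three sets of logits.
mod, carriers, device = MultiHeadPhyCNN()(torch.randn(4, 2, 512))
```

A feature-based pipeline would instead need a hand-crafted feature extractor and a separate classifier per output, which is exactly what inflates hardware cost and complicates fine-tuning.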
Real-Time Fine Tuning. Model-driven optimization offers predictable performance only when the model actually matches the reality of the underlying phenomenon being captured. This implies that model-driven systems can yield sub-optimal performance when the model assumptions differ from what the network is actually experiencing. For example, a model assuming a Rayleigh fading channel can yield incorrect solutions when placed in a Rician fading environment. By using a data-driven approach, PHY-DL can instead be easily fine-tuned with fresh spectrum data, which can be used to find a better set of parameters in real time through gradient descent. Hand-tailored model-driven systems may turn out to be hard to fine-tune, as they might depend on a set of parameters that are not easily adaptable in real time (e.g., the channel model). While DL "easily" accomplishes this goal by performing batch gradient descent on fresh input data, the same is not true for traditional ML, where tuning can be extremely challenging since it would require completely changing the circuit itself.
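As a minimal sketch of this fine-tuning step, assuming a single-output PyTorch classifier (the fine_tune helper, the SGD optimizer, and all hyperparameters are illustrative; the text above only prescribes batch gradient descent on freshly collected spectrum data):

```python
import torch
import torch.nn as nn

def fine_tune(model: nn.Module, fresh_iq: torch.Tensor,
              fresh_labels: torch.Tensor, steps: int = 10, lr: float = 1e-4):
    """Refine an already-trained PHY classifier on fresh spectrum data.

    fresh_iq: (batch, 2, N) I/Q samples captured under the current channel
    conditions; fresh_labels: their ground-truth classes. The step count,
    optimizer, and learning rate are assumptions, not from the paper.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(fresh_iq), fresh_labels)
        loss.backward()  # batch gradient descent on fresh input data
        opt.step()
    return model
```

A model-driven pipeline has no analogous knob: adapting, say, a Rayleigh-specific estimator to Rician fading means re-deriving the model, whereas here only the weights move.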
III. DEEP LEARNING AT THE PHYSICAL LAYER: SYSTEM REQUIREMENTS AND CHALLENGES

The target of this section is to discuss existing system-level challenges in PHY-DL, as well as the state of the art in addressing them. For a more detailed compendium of the state of the art, the interested reader can refer to the comprehensive surveys in [6–9].

To ease the reader into the topic, we summarize at a very high level the main components and operations of a learning-based wireless device in Fig. 3. The core feature that distinguishes learning-based devices is that digital signal processing (DSP) decisions are driven by deep neural networks (DNNs). In particular, in the Receiver (RX) DSP chain, the incoming waveform is first received and placed in an I/Q buffer (step 1).

Fig. 3: Main components and operations of a learning-based wireless device (CPU, hardware cores, and a convolutional neural network with its weights).
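As a loose, software-only sketch of step 1 and the inference call that would follow it (the buffer depth and function names are assumptions; on a real platform this logic would sit in the hardware fabric, as discussed in Section II):

```python
import collections
import numpy as np
import torch

BUFFER_LEN = 512  # assumed I/Q buffer depth; not specified in the text
iq_buffer = collections.deque(maxlen=BUFFER_LEN)

def on_samples(i: np.ndarray, q: np.ndarray):
    """Step 1: the incoming waveform's I/Q samples are placed in the buffer."""
    iq_buffer.extend(zip(i, q))

def run_dnn(model: torch.nn.Module):
    """Hand the buffered samples to the DNN that drives the DSP decisions."""
    if len(iq_buffer) < BUFFER_LEN:
        return None  # wait until the buffer is full
    iq = torch.tensor(np.array(iq_buffer), dtype=torch.float32)
    return model(iq.T.unsqueeze(0))  # shape (1, 2, BUFFER_LEN)
```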