Low-complexity high performance receiver for nonlinearly distorted OFDM signals

Bibliographic Details
Main Author: Shao, Shengke
Other Authors: Guan Yong Liang
Format: Thesis-Master by Coursework
Language:English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/164942
Institution: Nanyang Technological University
Description
Summary: The orthogonal frequency division modulation (OFDM) waveform suffers from a high peak-to-average power ratio (PAPR), which forces the high-power amplifier to operate at full load. Peak clipping is a popular and widely used method to reduce the PAPR of OFDM. However, the nonlinear clipping distortion it introduces significantly degrades communication system performance, increasing the symbol error rate (SER). Existing efforts have been reported to recover clipping-distorted signals, but their calculation procedures involve complicated formulas, which makes the current models hard to deploy in practice. To address this problem, we propose a machine learning (ML) based anti-clipping receiver algorithm for OFDM that alleviates the negative influence of peak clipping and significantly simplifies the signal recovery procedure. Specifically, a deep learning relationship is developed that links clipped signals to their original signals. This relationship is implemented using a long short-term memory (LSTM) network, which captures the temporal structure of training data generated from the clipped transmitter signal in a conventional OFDM system. At the receiver, the received data are recovered by applying the well-trained LSTM model. Simulation results show that the proposed ML-based scheme recovers the clipped (nonlinearly distorted) OFDM signal with a much better symbol error rate than a conventional OFDM system without clipping over a multipath fading channel. When the transmitted signal is clipped at a threshold equal to the average signal power (a clipping ratio of 1), the proposed method achieves the same symbol error rate of 10^-4 with 0.1 of the transmit power of conventional systems.
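To make the clipping operation described above concrete, the following is a minimal sketch, not taken from the thesis: the subcarrier count, the QPSK mapping, and the clipping-ratio definition CR = A / sqrt(P_avg) (so CR = 1 puts the amplitude threshold at the root of the average power) are all assumptions for illustration. It clips an OFDM time-domain symbol and checks that the PAPR drops.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (assumed, not from the thesis)

# Random QPSK symbols on each subcarrier, then IFFT to the time domain.
bits = rng.integers(0, 4, N)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
x = np.fft.ifft(syms) * np.sqrt(N)  # unit-average-power OFDM symbol

# Amplitude threshold for clipping ratio CR = 1 (assumed definition):
# A = CR * sqrt(average power).
p_avg = np.mean(np.abs(x) ** 2)
cr = 1.0
A = cr * np.sqrt(p_avg)

# Soft envelope clipping: scale any sample whose magnitude exceeds A
# back onto the circle of radius A, preserving its phase.
mag = np.abs(x)
x_clip = np.where(mag > A, A * x / np.maximum(mag, 1e-12), x)

# PAPR before and after clipping (linear, not dB).
papr_before = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
papr_after = np.max(np.abs(x_clip) ** 2) / np.mean(np.abs(x_clip) ** 2)
```

The clipped waveform `x_clip` is what the thesis's LSTM-based receiver would learn to map back to the original `x`; the nonlinear distortion is exactly the difference `x - x_clip`.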
Key words: Orthogonal frequency division modulation, peak-to-average power ratio, machine learning, peak clipping, anti-clipping algorithm