Distributed Radar Autofocus Imaging Using Deep Priors


Antenna position ambiguity is a common problem affecting radar imaging systems mounted on mobile platforms. Existing approaches that recover a sharp radar image despite this ambiguity estimate the shift in the antenna positions by modeling the radar scene as a sparse image with a small number of targets, using explicit analytical models for the statistical distribution of the targets in a radar image. The radar imaging problem is then solved by alternating between estimating the radar image and estimating the shift in the antenna positions until convergence is reached. While such approaches have shown considerable success, they still struggle to recover the true target positions and may converge to incorrect local optima when the measurement noise level is high. In this work, we develop a data-driven, learning-based strategy for modeling the image of the radar scene instead of relying on explicit analytical models. We adopt a residual U-Net neural network architecture to act as a denoising operator that takes a backprojected radar image as input and outputs the true target image. While deep denoisers may generally result in unstable iterative algorithms, we introduce a simple filtering step that suppresses noise belonging to the null space of the radar operator from the iterates, thereby stabilizing the iterative procedure. We evaluate the effectiveness of our solution in simulated numerical experiments and demonstrate its superiority over the analytic signal prior.
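The stabilizing filter mentioned above can be illustrated with a minimal linear-algebra sketch. Here a random wide matrix stands in for the (linearized) radar forward operator, and the filter is the projection onto the row space of that operator; components of an iterate lying in the operator's null space are invisible to the data term, so noise accumulating there is suppressed without altering the measurements. The matrix `A`, the dimensions, and the function name `null_space_filter` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a random wide matrix plays the role of the
# linearized radar forward operator A (measurements = A @ scene).
m, n = 30, 100
A = rng.standard_normal((m, n))

# Projector onto the row space of A; (I - P) projects onto null(A).
# Null-space components do not affect A @ x, so noise living there
# can drift freely across iterations unless it is filtered out.
P = np.linalg.pinv(A) @ A

def null_space_filter(x):
    """Filtering step: suppress the null-space component of an iterate."""
    return P @ x

# Demonstration: corrupt an iterate with noise confined to null(A).
x = rng.standard_normal(n)
noise = (np.eye(n) - P) @ rng.standard_normal(n)
x_noisy = x + noise

x_filtered = null_space_filter(x_noisy)

# The filter removes the null-space noise exactly, recovering the
# row-space part of x, while leaving the measurements unchanged.
print(np.linalg.norm(x_filtered - P @ x))
print(np.linalg.norm(A @ x_noisy - A @ x_filtered))
```

In an iterative reconstruction, such a projection would be applied to each iterate after the denoising step, so that the learned denoiser shapes the solution while the data-consistent subspace is preserved.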