Details
Original language | English |
---|---|
Qualification | Doctor of Engineering |
Awarding Institution | |
Supervised by | |
Date of Award | 8 Aug 2024 |
Place of Publication | Hannover |
Publication status | Published - 12 Nov 2024 |
Abstract

Synthetic Aperture Radar (SAR) imaging from small Unmanned Aerial Vehicles (UAVs) promises to be a powerful technology for applications in remote sensing and related fields. In many applications, onboard image generation and live image transmission to the ground would increase its practicality. Small UAVs are susceptible to wind, and uncompensated flight path deviations cause phase errors which degrade the image quality. This can be overcome with autofocus techniques. The goal of this thesis is to determine how autofocus processing fits into the field of real-time SAR processing. For this purpose, an existing concept for onboard SAR processing was extended to include autofocus techniques. Special focus was placed on high-performance processing to meet the goal of real-time image generation. The newly designed autofocus system was implemented on a Field Programmable Gate Array (FPGA) and a Transport Triggered Architecture (TTA) processor. As a first step, a realistic UAV-based SAR system was envisioned to provide a framework for the rest of this thesis. Subsequently, multiple SAR autofocus algorithms were compared regarding their suitability for an FPGA implementation. The Backprojection Autofocus of Duersch and Long was selected as the most suitable. The image formation algorithm parallelizes well and can exploit the parallel nature of an FPGA. In contrast, the autofocus algorithm was found to be limited by the available memory bandwidth. On the selected FPGA platform, only three pixels could be processed in parallel, resulting in a 36.5× increase in runtime and a 27.8× increase in energy compared to image formation alone. In terms of energy efficiency, the FPGA version was found to be more efficient than an implementation on an embedded Graphical Processing Unit (GPU), at 1.54×10^−8 J/Op and 2.23×10^−8 J/Op, respectively. As an alternative, TTA processors were explored. A customized processor was synthesized for the 22FDX technology from GlobalFoundries. Running at 700 MHz, a single core has an estimated power consumption of 201 mW. This results in an energy efficiency of 5.44×10^−8 J/Op, which is of the same order of magnitude as the FPGA version. However, to reach a practically relevant throughput, a multi-core implementation is required. Assuming a realistic imaging geometry, the FPGA implementation presented here allows for a flight speed of 3.41 m/s while performing live image formation and focusing. This thesis therefore shows that full-size image generation and autofocus processing are possible in real time and within the power and space constraints of a small UAV.
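The abstract describes time-domain backprojection image formation combined with the Backprojection Autofocus of Duersch and Long. To illustrate the kind of computation involved, the following is a minimal NumPy sketch of backprojection with a per-pulse phase-correction hook and a single correlation-based phase estimate per pulse. The function names, the nearest-neighbour range interpolation, and the non-iterative estimation rule are assumptions made for this sketch; they are not taken from the thesis' FPGA or TTA implementation.

```python
# Hypothetical illustration of time-domain backprojection with a per-pulse
# phase-correction step, in the spirit of backprojection autofocus.
# Array shapes, parameter names, and the estimation rule are assumptions
# for this sketch, not the design described in the thesis.
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def backproject(rc_data, ant_pos, pixel_pos, fc, fs, phase_corr=None):
    """Sum range-compressed pulses into a complex image on a pixel grid.

    rc_data    : (n_pulses, n_range_bins) complex range-compressed samples
    ant_pos    : (n_pulses, 3) antenna position per pulse [m]
    pixel_pos  : (n_pixels, 3) pixel positions on the imaging grid [m]
    fc         : carrier frequency [Hz], fs : range sampling rate [Hz]
    phase_corr : optional (n_pulses,) autofocus phase correction [rad]
    """
    n_pulses, n_bins = rc_data.shape
    image = np.zeros(pixel_pos.shape[0], dtype=complex)
    for p in range(n_pulses):
        # Round-trip delay from the antenna position to every pixel and back
        r = np.linalg.norm(pixel_pos - ant_pos[p], axis=1)
        tau = 2.0 * r / C
        # Nearest-neighbour lookup into the range-compressed pulse
        idx = np.clip(np.round(tau * fs).astype(int), 0, n_bins - 1)
        contrib = rc_data[p, idx] * np.exp(1j * 2.0 * np.pi * fc * tau)
        if phase_corr is not None:
            contrib *= np.exp(1j * phase_corr[p])
        image += contrib
    return image

def estimate_phase_errors(rc_data, ant_pos, pixel_pos, fc, fs):
    """One illustrative autofocus pass: align each pulse's contribution with
    the image formed from the remaining pulses (assumption: a single
    correlation-based estimate per pulse, no iteration)."""
    n_pulses = rc_data.shape[0]
    full = backproject(rc_data, ant_pos, pixel_pos, fc, fs)
    corr = np.zeros(n_pulses)
    for p in range(n_pulses):
        single = backproject(rc_data[p:p + 1], ant_pos[p:p + 1],
                             pixel_pos, fc, fs)
        rest = full - single
        # The phase of the inner product gives the per-pulse correction
        corr[p] = -np.angle(np.vdot(rest, single))
    return corr
```

A brief sanity check of the energy-efficiency figures quoted above follows the citation record at the end of this page.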
Cite this
Rother, N. (2024). Hardware architectures for synthetic aperture radar autofocus. Doctoral thesis. Hannover. 114 p. https://doi.org/10.15488/18117
Research output: Thesis › Doctoral thesis
TY - BOOK
T1 - Hardware architectures for synthetic aperture radar autofocus
AU - Rother, Niklas
PY - 2024/11/12
Y1 - 2024/11/12
AB - Synthetic Aperture Radar (SAR) imaging from small Unmanned Aerial Vehicles (UAVs) promises to be a powerful technology for applications in remote sensing and related fields. In many applications, onboard image generation and live image transmission to the ground would increase its practicality. Small UAVs are susceptible to wind, and uncompensated flight path deviations cause phase errors which degrade the image quality. This can be overcome with autofocus techniques. The goal of this thesis is to determine how autofocus processing fits into the field of real-time SAR processing. For this purpose, an existing concept for onboard SAR processing was extended to include autofocus techniques. Special focus was placed on high-performance processing to meet the goal of real-time image generation. The newly designed autofocus system was implemented on a Field Programmable Gate Array (FPGA) and a Transport Triggered Architecture (TTA) processor. As a first step, a realistic UAV-based SAR system was envisioned to provide a framework for the rest of this thesis. Subsequently, multiple SAR autofocus algorithms were compared regarding their suitability for an FPGA implementation. The Backprojection Autofocus of Duersch and Long was selected as the most suitable. The image formation algorithm parallelizes well and can exploit the parallel nature of an FPGA. In contrast, the autofocus algorithm was found to be limited by the available memory bandwidth. On the selected FPGA platform, only three pixels could be processed in parallel, resulting in a 36.5× increase in runtime and a 27.8× increase in energy compared to image formation alone. In terms of energy efficiency, the FPGA version was found to be more efficient than an implementation on an embedded Graphical Processing Unit (GPU), at 1.54×10^−8 J/Op and 2.23×10^−8 J/Op, respectively. As an alternative, TTA processors were explored. A customized processor was synthesized for the 22FDX technology from GlobalFoundries. Running at 700 MHz, a single core has an estimated power consumption of 201 mW. This results in an energy efficiency of 5.44×10^−8 J/Op, which is of the same order of magnitude as the FPGA version. However, to reach a practically relevant throughput, a multi-core implementation is required. Assuming a realistic imaging geometry, the FPGA implementation presented here allows for a flight speed of 3.41 m/s while performing live image formation and focusing. This thesis therefore shows that full-size image generation and autofocus processing are possible in real time and within the power and space constraints of a small UAV.
U2 - 10.15488/18117
DO - 10.15488/18117
M3 - Doctoral thesis
CY - Hannover
ER -
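The abstract above quotes energy-per-operation figures for the FPGA, the embedded GPU, and the TTA core, together with a 201 mW power estimate for a single TTA core at 700 MHz. The following sketch relates these numbers through the identity energy per operation = power / throughput; the implied single-core throughput and the relative-efficiency factors are inferences from the quoted figures, not values taken from the thesis.

```python
# Back-of-the-envelope check of the energy-efficiency figures quoted in the
# abstract. Energy per operation is power divided by operation throughput;
# the derived TTA throughput is an inference from the quoted numbers.
TTA_POWER_W = 0.201      # 201 mW at 700 MHz for a single TTA core (abstract)
TTA_J_PER_OP = 5.44e-8   # quoted energy efficiency of the TTA core
FPGA_J_PER_OP = 1.54e-8  # quoted energy efficiency of the FPGA version
GPU_J_PER_OP = 2.23e-8   # quoted energy efficiency of the embedded GPU

# Implied sustained throughput of a single TTA core: P / (J/Op)
tta_ops_per_s = TTA_POWER_W / TTA_J_PER_OP
print(f"implied TTA throughput: {tta_ops_per_s:.2e} Op/s")  # ~3.7e6 Op/s

# Relative efficiency of the FPGA against the GPU and the TTA core
print(f"FPGA vs GPU: {GPU_J_PER_OP / FPGA_J_PER_OP:.2f}x more efficient")
print(f"FPGA vs TTA: {TTA_J_PER_OP / FPGA_J_PER_OP:.2f}x more efficient")
```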