With the increasing use of drones across various industries, the navigation and tracking of these unmanned aerial vehicles (UAVs) in challenging environments, in particular GNSS-denied environments, have become critical issues. In this paper, we propose a novel ground-based UAV tracking system using a solid-state LiDAR, which dynamically adjusts the LiDAR frame integration time based on the distance to the UAV and its speed. Our method fuses two simultaneous scan integration frequencies for high accuracy and persistent tracking, enabling reliable state estimation of the UAV even in challenging scenarios. Applying the Inverse Covariance Intersection (ICI) method together with Kalman filters further improves tracking accuracy and robustness. Compared to previous works in solid-state LiDAR tracking, this paper presents a more complete and robust solution.
We have performed a number of experiments to evaluate the performance of the proposed tracking system and identify its limitations. Our experimental results demonstrate that the proposed method clearly outperforms the baseline method and makes tracking more robust across different types of trajectories.
We propose two tracking estimators running in parallel at two different scan frequencies, where the integration time, which defines the number of scans to accumulate, is dynamically adjusted to optimize the point cloud density.
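As a concrete illustration, the sketch below shows how the two parallel estimates could be fused with Inverse Covariance Intersection, together with a hypothetical rule for choosing the number of scans to accumulate from the target's distance and speed. The function `choose_num_scans`, its bounds, and the grid search over the ICI weight are illustrative assumptions, not the exact implementation used in the paper.

```python
import numpy as np

def choose_num_scans(distance_m, speed_mps, n_min=2, n_max=20):
    """Hypothetical rule: accumulate more scans when the target is far and slow,
    fewer when it is close and fast (the paper's actual mapping may differ)."""
    n = n_min + (n_max - n_min) * (distance_m / 50.0) / (1.0 + speed_mps)
    return int(np.clip(round(n), n_min, n_max))

def ici_fuse(x1, P1, x2, P2, n_omega=50):
    """Inverse Covariance Intersection of two estimates with unknown cross-correlation.
    The weight omega is chosen by a simple grid search minimizing the trace of
    the fused covariance."""
    best = None
    for omega in np.linspace(1e-3, 1.0 - 1e-3, n_omega):
        M = np.linalg.inv(omega * P1 + (1.0 - omega) * P2)
        P_inv = np.linalg.inv(P1) + np.linalg.inv(P2) - M
        P = np.linalg.inv(P_inv)
        K1 = np.linalg.inv(P1) - omega * M
        K2 = np.linalg.inv(P2) - (1.0 - omega) * M
        x = P @ (K1 @ x1 + K2 @ x2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Example: fuse position estimates from the fast (short-integration) and
# slow (long-integration) trackers.
x_fast, P_fast = np.array([1.0, 2.0, 3.0]), np.diag([0.20, 0.20, 0.30])
x_slow, P_slow = np.array([1.1, 1.9, 3.1]), np.diag([0.05, 0.05, 0.10])
x_fused, P_fused = ici_fuse(x_fast, P_fast, x_slow, P_slow)
```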
While the presented outcomes demonstrate the feasibility of our proposed approach, it is worth noting that the quantitative results rely on the assumption that the initial position is already known. To address this issue, we have developed a method to detect the target's starting location in outdoor scenarios. Specifically, we employed the same custom YOLOv5 model trained on panoramic signal images generated by an Ouster LiDAR.
To minimize noise and artifacts when creating a single image, we first integrate a total of 30 frames. The 3D point cloud is then projected onto a 2D plane, taking into account both the field of view and the image resolution. The transformation combines the intensity and distance of each point through a weighted sum to produce the final pixel value. Using intensity or distance alone makes it challenging for the YOLO model to separate the drone from the background, both because of the drone's proximity to the ground and walls and because the reflectivity of the drone and the background are similar.
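The projection step can be sketched as follows; this minimal numpy example assumes a spherical projection, an equal 0.5/0.5 weighting of normalized intensity and range, and an Ouster-like image resolution and vertical field of view, none of which are necessarily the exact parameters used in the paper.

```python
import numpy as np

def project_to_image(points, intensity, width=1024, height=128,
                     v_fov=(-22.5, 22.5), w_intensity=0.5):
    """Project an accumulated point cloud (N x 3, sensor frame, e.g. ~30 merged
    frames) onto a 2D panoramic image whose pixel values are a weighted sum of
    normalized intensity and range. Resolution, FOV and weights are assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)

    # Spherical angles: azimuth spans the full 360 deg, elevation the vertical FOV.
    azimuth = np.arctan2(y, x)                                   # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(rng, 1e-6), -1.0, 1.0))

    u = ((azimuth + np.pi) / (2.0 * np.pi) * (width - 1)).astype(int)
    v_min, v_max = np.radians(v_fov[0]), np.radians(v_fov[1])
    v = ((v_max - elevation) / (v_max - v_min) * (height - 1)).astype(int)

    # Keep only points that fall inside the vertical field of view.
    valid = (v >= 0) & (v < height)
    u, v, rng, inten = u[valid], v[valid], rng[valid], intensity[valid]

    # Normalize both channels to [0, 1] and combine them with a weighted sum.
    rng_n = rng / max(rng.max(), 1e-6)
    inten_n = inten / max(inten.max(), 1e-6)
    value = w_intensity * inten_n + (1.0 - w_intensity) * rng_n

    img = np.zeros((height, width), dtype=np.float32)
    img[v, u] = value          # pixels with no projected point stay at zero
    return img
```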
Furthermore, normalization is applied to ensure appropriate contrast in the resulting image. Once the preliminary 2D image is obtained, its quality is enhanced through filtering and interpolation: we first identify areas with zero values and substitute them with constants to prevent visual discontinuities. There are two distinct cases that lead to zero-valued pixels after point cloud projection: the sky and other background regions where the emitted laser fails to reflect, and areas within the environment where objects might be present but the LiDAR does not scan. Differentiating between these two cases is crucial for the task of image completion, as it allows for an accurate understanding of the context surrounding the missing pixels.
To remove noise artifacts, we use binary thresholding and nearest-neighbor interpolation to fill in missing or noisy regions, which results in smoother and more accurate images.
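A minimal sketch of this completion step is given below, assuming a fixed threshold for noisy pixels, a constant fill value for no-return background regions, and a precomputed `sky_mask` separating the two zero-pixel cases described above; all of these are illustrative choices rather than the paper's exact parameters.

```python
import numpy as np
from scipy import ndimage

def complete_image(img, sky_mask, noise_thresh=0.02, background_value=0.5):
    """Fill zero-valued and noisy pixels in the projected panoramic image.

    sky_mask marks pixels where the laser produced no return (sky or absorbing
    background); these are replaced by a constant. Remaining near-zero pixels
    are treated as unscanned or noisy regions and are filled with the value of
    the nearest valid pixel (nearest-neighbor interpolation). The threshold
    and constant are assumptions for this sketch."""
    out = img.copy()

    # Background with no expected laser return: replace with a constant to
    # avoid visual discontinuities.
    out[sky_mask] = background_value

    # Binary threshold: pixels below the threshold (and not sky) are treated
    # as missing or noisy.
    invalid = (out < noise_thresh) & ~sky_mask

    # Nearest-neighbor fill: for every invalid pixel, take the value of the
    # closest valid pixel.
    nearest = ndimage.distance_transform_edt(invalid, return_distances=False,
                                             return_indices=True)
    out = out[tuple(nearest)]

    # Normalize to [0, 1] for appropriate contrast before detection.
    out = (out - out.min()) / max(out.max() - out.min(), 1e-6)
    return out
```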
@misc{catalano2023uav,
title={UAV Tracking with Solid-State Lidars: Dynamic Multi-Frequency Scan Integration},
author={Iacopo Catalano and Ha Sier and Xianjia Yu and Jorge Pena Queralta and Tomi Westerlund},
year={2023},
eprint={2304.12125},
archivePrefix={arXiv},
primaryClass={cs.RO}
}