Landing an unmanned aerial vehicle (UAV) autonomously and safely is a challenging task. Although existing approaches have resolved the problem of precise landing by identifying a specific landing marker using the UAV's onboard vision system, the vast majority of these works are conducted in either daytime or well-illuminated laboratory environments. In contrast, very few researchers have investigated the possibility of landing in low-illumination conditions, and those who have rely on various active light sources to lighten the markers. In this paper, a novel vision system design is proposed to tackle UAV landing in outdoor extreme low-illumination environments without the need to apply an active light source to the marker. We use a model-based enhancement scheme to improve the quality and brightness of the onboard captured images, then present a hierarchical method consisting of a decision tree with an associated lightweight convolutional neural network (CNN) for coarse-to-fine landing marker localization, where the key information of the marker is extracted and reserved for post-processing, such as pose estimation and landing control. Extensive evaluations have been conducted to demonstrate the robustness, accuracy, and real-time performance of the proposed vision system. Field experiments across a variety of outdoor nighttime scenarios, with an average luminance of 5 lx at the marker locations, have proven the feasibility and practicability of the system.

Unmanned aerial vehicles (UAVs) are cost-efficient, highly maneuverable, casualty-free aerial units that have been broadly adopted in civil applications and military operations such as surveillance, traffic and weather monitoring, cargo delivery, agricultural production, damage inspection, radiation mapping, and search and rescue (SAR), to name a few. For missions requiring repeated flight operations where human intervention is impossible, autonomous takeoff and landing are essential capabilities for a UAV, and they have been extensively studied by researchers worldwide over the last few decades. Although launching a UAV is relatively easy, landing it is the most challenging part in many circumstances due to high risks and environmental uncertainties. According to statistics, crashes and accidents are most likely to occur in the landing phase, jeopardizing the safety of the UAVs involved. For a successful autonomous landing, a prerequisite is to know the precise location of the landing site. To resolve this pressing issue, a widely accepted approach is to use machine vision to detect artificial landing markers that assist UAV autonomous landing. With this information, a UAV can gradually minimize its distance to the landing site, descend to a proper altitude, and perform touch-down in the final descent phase. One of the most significant advantages of machine vision is that it provides rich information about the surrounding environment without emitting radiation. It is also lightweight, low-cost, energy-efficient, and friendly to stealth operations. Moreover, machine vision is robust to signal jamming and telemetry interference due to its passive nature. At a distance, a UAV may carry out preliminary detection of landing markers using machine vision while relying on other navigational means such as the global navigation satellite system (GNSS) or an inertial measurement unit (IMU). At close range, vision sensors can determine both the relative position and attitude between the UAV and the landing marker with sub-millimeter accuracy, information that is essential for precise landing control. During the entire landing maneuver, vision sensors can couple with GNSS or IMU to obtain more reliable measurements.
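To make the enhancement step concrete, the sketch below brightens an under-exposed frame with simple gamma correction. This is only an illustrative stand-in for the paper's model-based enhancement scheme, which is not specified here; the function name and the gamma value are assumptions.

```python
# Illustrative low-light enhancement via gamma correction (gamma < 1 lifts
# dark tones). A stand-in for the paper's model-based scheme, not its actual
# implementation.

def enhance_low_light(image, gamma=0.4):
    """Brighten a grayscale image (pixel values 0..255) by gamma correction.

    Each pixel is normalized to [0, 1], raised to the power `gamma`, and
    rescaled; gamma < 1 expands dark tones so a detector can find a landing
    marker in under-exposed nighttime frames.
    """
    return [[round(255 * (px / 255) ** gamma) for px in row] for row in image]

# A dark 2x3 frame: mid-dark pixels are lifted far more than near-black ones,
# while 0 and 255 are fixed points of the curve.
frame = [[10, 40, 80],
         [0, 20, 255]]
bright = enhance_low_light(frame)
```

Because the mapping is monotonic, relative ordering of intensities is preserved, so marker-versus-background contrast survives the enhancement.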
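The coarse-to-fine idea behind the hierarchical localization can be sketched as a cheap decision-tree-style gate that discards implausible candidate regions before an expensive scorer runs. The thresholds and the `fine_score` placeholder below are hypothetical; the paper's actual decision tree and CNN are trained models, not hand-written rules.

```python
# Illustrative coarse-to-fine marker localization: a cheap gate first, the
# costly scorer only on survivors. All thresholds and scores are assumptions.

def coarse_gate(region):
    """Cheap decision-tree-style test: reject regions whose mean intensity
    or area is implausible for an illuminated landing marker."""
    mean = sum(region["pixels"]) / len(region["pixels"])
    if mean < 30:             # too dark to be the marker
        return False
    if region["area"] < 16:   # too small to localize reliably
        return False
    return True

def fine_score(region):
    """Placeholder for the lightweight CNN: normalized mean brightness."""
    return sum(region["pixels"]) / (255 * len(region["pixels"]))

def localize(regions):
    """Apply the coarse gate, then pick the best-scoring survivor."""
    survivors = [r for r in regions if coarse_gate(r)]
    return max(survivors, key=fine_score, default=None)

candidates = [
    {"name": "shadow", "pixels": [5, 10, 8],       "area": 100},  # too dark
    {"name": "marker", "pixels": [200, 220, 180],  "area": 64},
    {"name": "glint",  "pixels": [250, 240, 230],  "area": 4},    # too small
]
best = localize(candidates)
```

The design point is the same as in the paper: the expensive model only ever sees the few regions that pass the cheap checks, which is what makes real-time onboard operation plausible.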
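One minimal way to realize the vision/GNSS coupling mentioned above is inverse-variance weighting of the two position fixes, shown below for a single axis. The specific variances are illustrative assumptions (vision accurate at close range, GNSS at long range); the paper does not prescribe this particular fusion rule.

```python
# Minimal sketch of fusing a vision fix with a GNSS fix by inverse-variance
# weighting, one axis only. Variance values are illustrative assumptions.

def fuse(vision_pos, vision_var, gnss_pos, gnss_var):
    """Inverse-variance weighted average of two position estimates.

    The measurement with the smaller variance dominates the fused result.
    """
    w_v = 1.0 / vision_var
    w_g = 1.0 / gnss_var
    return (w_v * vision_pos + w_g * gnss_pos) / (w_v + w_g)

# Close to the marker the vision variance is tiny, so the fused estimate
# follows the vision measurement almost exactly.
fused = fuse(vision_pos=1.00, vision_var=0.01, gnss_pos=2.00, gnss_var=1.00)
```

In a full system the same weighting generalizes to a Kalman filter update, with the weights varying over the descent as vision accuracy improves at close range.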