The integration of drone technology into precision agriculture offers promising solutions for enhancing crop monitoring, optimizing resource management, and improving sustainability. This study investigates the application of unmanned aerial vehicle (UAV)-based remote sensing in Sidi Bouzid, Tunisia, focusing on olive tree cultivation in a semi-arid environment. REMO-M professional drones equipped with RGB and multispectral sensors were deployed to collect high-resolution imagery, enabling advanced geospatial analysis. A comprehensive methodology was implemented, including precise flight planning, image processing, GIS-based mapping, and Normalized Difference Vegetation Index (NDVI) assessments to evaluate vegetation health. The results demonstrate the significant contribution of UAV imagery to generating accurate land use classifications, detecting plant health variations, and optimizing water resource distribution. NDVI analysis revealed clear distinctions in vegetation vigor, highlighting areas affected by water stress and nutrient deficiencies. Compared with traditional monitoring methods, drone-based assessments provided high spatial resolution and real-time data, facilitating early detection of agronomic issues. These findings underscore the pivotal role of UAV technology in advancing precision agriculture, particularly in semi-arid regions where climate variability challenges sustainable farming. The study provides a replicable framework for integrating drone-based monitoring into agricultural decision-making, offering strategies to improve productivity, water efficiency, and environmental resilience. The research contributes to the growing body of knowledge on agricultural technology adoption in Tunisia and similar contexts, supporting data-driven approaches to climate-smart agriculture.
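As an illustration of the NDVI assessment step, the following is a minimal sketch assuming the co-registered near-infrared and red bands of the multispectral imagery are already available as NumPy arrays; the function names and the vigor thresholds are illustrative assumptions, not values reported in the study.

```python
import numpy as np

def compute_ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    # Guard against division by zero over shadow or no-data pixels
    return np.where(denom == 0, 0.0, (nir - red) / denom)

def vigor_classes(ndvi: np.ndarray) -> np.ndarray:
    """Bin NDVI into 0 = bare/stressed, 1 = moderate, 2 = healthy canopy.
    The 0.2 and 0.5 cut-offs are illustrative, not the study's thresholds."""
    return np.digitize(ndvi, bins=[0.2, 0.5])
```

In a typical workflow the binned map would then be exported to a GIS layer so that stressed zones can be cross-checked against irrigation and soil data.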
The detection of drones in complex and dynamic environments poses significant challenges due to their small size and background clutter. This study addresses these challenges by developing a motion-based pipeline that integrates background subtraction and deep learning-based classification to detect drones in video sequences. Two background subtraction methods, Mixture of Gaussians 2 (MOG2) and Visual Background Extractor (ViBe), are assessed for isolating potential drone regions against highly complex and dynamic backgrounds. These candidate regions are then classified using the ResNet18 architecture. The Drone-vs-Bird dataset is used to evaluate the pipeline, focusing on distinguishing drones from other dynamic objects such as birds, trees, and clouds. By restricting classification to motion-based candidate regions, the method reduces the computational demands of drone detection. Results show that ViBe achieves a recall of 0.956 and a precision of 0.078, while MOG2 achieves a recall of 0.857 and a precision of 0.034, highlighting the comparative advantages of ViBe in detecting small drones in challenging scenarios. These findings demonstrate the robustness of the proposed pipeline and its potential contribution to enhancing surveillance and security measures.
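To make the motion-then-classify flow concrete, the following is a minimal sketch assuming OpenCV's MOG2 implementation (ViBe is not bundled with OpenCV's main module) and a ResNet18 with a two-class head that would, in practice, be fine-tuned on the Drone-vs-Bird dataset; the blob-size threshold and the class indexing are illustrative assumptions, not parameters reported in the study.

```python
import cv2
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Background subtractor producing a foreground mask per frame
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=False)

# Classifier with a 2-class head (drone vs. non-drone); untrained here,
# it would be fine-tuned on cropped candidate regions in practice
model = resnet18(num_classes=2)
model.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def detect_drones(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes of moving regions classified as drones."""
    mask = subtractor.apply(frame)
    # Morphological opening suppresses isolated noise pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 25:  # discard tiny blobs (assumed noise threshold)
            continue
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(preprocess(crop).unsqueeze(0))
        if logits.argmax(dim=1).item() == 1:  # class 1 assumed "drone"
            detections.append((x, y, w, h))
    return detections
```

Because only the small candidate crops are passed through the network rather than the full frame, the classification cost scales with the number of moving regions, which is the source of the computational savings described above.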