Intelligent visual navigation solution
At present, three main types of obstacle-avoidance systems are used on electric multi-rotor UAVs: ultrasonic, TOF (a form of laser ranging), and visual navigation. Advances in cameras and image-processing technology have driven the development of visual navigation. Visual navigation is a newer technique in which cameras capture images of the surrounding environment; the images are filtered and processed to determine the vehicle's own pose, recognize the path, and make navigation decisions. Because visual navigation works passively, the equipment is simple, low in cost, and widely applicable. Its most important features are autonomy and real-time operation: it does not rely on any external equipment, and derives navigation information solely from onboard data and the observed environment.
The main functions of the UAV intelligent visual navigation module are visual navigation and positioning, binocular obstacle avoidance, downward perception and positioning, downward target recognition, and precision landing.
The Firefly Workshop intelligent visual navigation module integrates a binocular camera, a monocular VIO fisheye camera, and a downward-facing visible-light camera. With an industrial-grade CAN bus interface and advanced machine-vision algorithms, including a deep convolutional neural network running on the Qualcomm APQ8096, it provides industrial autonomous drones with powerful machine-vision features: forward autonomous obstacle avoidance, a visual odometer, high-precision visual positioning, downward target recognition, and intelligent landing.
1. VIO (Visual-Inertial Odometry)
VIO (Visual Inertial Odometry) is visual-inertial measurement. In unknown, feature-poor environments, the inertial sensor and the forward and downward wide-angle cameras are used simultaneously; image features are fused with IMU data by an extended Kalman filter to track position and orientation, determining the vehicle's pose in a relative coordinate system, which can then be applied to route planning. The origin of the relative coordinate system is configurable. The visual navigation module computes flight mileage from the VIO measurements and the origin position, and can thus serve as a flight odometer.
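The odometer side of this can be sketched simply: flight mileage is the sum of straight-line distances between successive VIO position fixes in the relative frame. The function name and data layout below are illustrative, not the module's actual API.

```python
import math

def flight_odometer(positions):
    """Accumulate distance travelled (metres) from a sequence of
    VIO position fixes (x, y, z) in the relative coordinate frame."""
    total = 0.0
    for p0, p1 in zip(positions, positions[1:]):
        total += math.dist(p0, p1)
    return total

# A 1 m square flown at constant altitude: 4 m of mileage.
path = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1), (0, 0, 1)]
print(flight_odometer(path))  # 4.0
```

In practice the fixes would arrive at the VIO update rate and the sum would be accumulated incrementally rather than over a stored list.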
2. Binocular visual positioning
Binocular visual positioning uses the two cameras' visual information together with the flight controller's IMU data to compute the aircraft's current position and attitude (relative to the starting position and attitude), computes the aircraft's current speed from the time difference between adjacent frames, and feeds the results back to the flight controller, achieving accurate positioning and accurate speed measurement.
The module acquires the left and right binocular images in real time and performs epipolar rectification, while sampling IMU data and flight-control attitude information at high frequency (above 200 Hz). A binocular VIO algorithm computes the aircraft's current position and attitude in real time; combined with timestamp information, it computes the aircraft's speed. Position and speed are fed back to the flight controller in real time, enabling precise hovering and precise positioning. Binocular obstacle avoidance consists of two parts: depth-map computation and obstacle-avoidance logic.
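The speed estimate from adjacent frames is essentially a finite difference of position fixes over their timestamp gap. A minimal sketch with illustrative names (not the module's API):

```python
def velocity_from_frames(p0, p1, t0, t1):
    """Estimate velocity (m/s) from two pose fixes p0, p1 (metres)
    taken at timestamps t0, t1 (seconds)."""
    dt = t1 - t0
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return tuple((b - a) / dt for a, b in zip(p0, p1))

# 2.5 m of forward motion over 0.5 s -> 5 m/s along x.
print(velocity_from_frames((0.0, 0.0, 1.0), (2.5, 0.0, 1.0), 0.0, 0.5))
# (5.0, 0.0, 0.0)
```

This is why high-frequency, well-synchronized timestamps matter: at 200 Hz the frame gap is only 5 ms, so small timing errors translate directly into speed error.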
A software algorithm (running on the APQ8074) performs epipolar rectification and depth computation to obtain the depth map, acquires the aircraft's attitude in real time through the flight-control API, plans a safe flight route in real time in the obstacle-avoidance logic, and feeds the path to the flight controller, realizing obstacle avoidance. Binocular obstacle-avoidance performance: obstacles within 0.8~40m are accurately identified; when the aircraft's speed is below 25m/s, it can bypass an obstacle or hover 2m in front of it.
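The decision side of the obstacle-avoidance logic can be caricatured as a threshold rule over the nearest depth-map reading. The thresholds (0.8~40m detection range, 2m hold distance) come from the text above; the rule itself is a toy sketch, not the module's real planner.

```python
def avoidance_action(nearest_range_m,
                     stop_distance_m=2.0,
                     min_range_m=0.8,
                     max_range_m=40.0):
    """Decide a reaction from the nearest depth-map reading (metres).
    Thresholds follow the ranges quoted in the text; the rule itself
    is an illustrative stand-in for the module's route planning."""
    if nearest_range_m is None or nearest_range_m > max_range_m:
        return "continue"   # nothing reliably detected in range
    if nearest_range_m <= stop_distance_m or nearest_range_m < min_range_m:
        return "hover"      # hold position ~2 m before the obstacle
    return "replan"         # obstacle ahead: plan a path around it

print(avoidance_action(10.0))  # replan
print(avoidance_action(1.5))   # hover
```

The real planner works on the full depth map and the aircraft's attitude, not a single range value, so it can choose a bypass route rather than only stopping.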
3. Downward visual positioning
Downward visual positioning performs horizontal positioning by analyzing motion in the downward image. The usable height range is 0.25-25m, at flight speeds ≤10m/s. It requires a clearly textured surface below the flight area, with illumination >15 lux. The downward video stream can also be passed through transparently to the drone's onboard computer, or transmitted back to the ground station, to identify the ground environment where the aircraft will land.
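The stated operating envelope can be captured in a small precondition check. The limits are the ones quoted above; the helper itself is illustrative, not the module's own validity test.

```python
def downward_positioning_available(height_m, speed_mps, illuminance_lux):
    """Operating envelope quoted for downward visual positioning:
    height 0.25-25 m, speed <= 10 m/s, illumination > 15 lux.
    Texture quality of the ground is a further requirement that a
    simple threshold check like this cannot capture."""
    return (0.25 <= height_m <= 25.0
            and speed_mps <= 10.0
            and illuminance_lux > 15.0)

print(downward_positioning_available(5.0, 3.0, 200.0))   # True
print(downward_positioning_available(30.0, 3.0, 200.0))  # False
```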
When the user taps "Precision Landing", if a landing mark or landing target, such as a rooftop, QR code, or "H" marker, is visible in the preview interface, the vision module sends the target information to the flight controller so that the drone lands on the target. The intelligent visual navigation module consists of a CPU, a main camera module, a fisheye camera module, a binocular camera module, a downward binocular module, an IMU, and power management.
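For the flight controller to land on the detected marker, its pixel position in the downward image must be converted into a ground offset. A minimal pinhole-geometry sketch, assuming a level, nadir-pointing camera and square pixels; all names here are hypothetical, and the real module would fuse this with attitude and altitude estimates.

```python
import math

def landing_offset(cx_px, cy_px, img_w, img_h, hfov_deg, altitude_m):
    """Convert a detected landing-marker centre (pixels) into an
    approximate ground offset (metres) from the point directly
    below the camera. Assumes a level nadir camera, square pixels."""
    ground_half_width = altitude_m * math.tan(math.radians(hfov_deg) / 2)
    m_per_px = 2 * ground_half_width / img_w
    dx = (cx_px - img_w / 2) * m_per_px   # +x: right in the image
    dy = (cy_px - img_h / 2) * m_per_px   # +y: down in the image
    return dx, dy

# Marker at the image centre means the drone is directly above it.
print(landing_offset(320, 240, 640, 480, 60.0, 10.0))  # (0.0, 0.0)
```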
Main specifications:
1. Binocular Camera
- Image sensor: OV7251 1/7.5", CMOS, 640*480;
- Lens: FOV72°D, f/2.0, depth of field focus to infinity;
2. Fisheye VIO Camera
- Image sensor: OV7251 1/7.5", CMOS, 640*480; 45° forward mounting
- Lens: FOV166°D, f/2.0, depth of field 0.05m to infinity;
3. Downward visible-light camera IMX214 (with Qualcomm 801 platform)
- Image sensor: IMX214 1/3.06"; mounted vertically, facing the ground;
- Lens: FOV75°D, f/2.2, depth of field 1.86m to infinity;
4. Downward visible-light camera OV12895 (with Qualcomm 820 platform)
- Image sensor: OV12895 1/2.3", replacing the IMX214 on the Qualcomm 820 platform; mounted vertically, facing the ground;
- Lens: FOV80°D, f/2.0, depth of field 2.07m to infinity;
- Video output: USB 2.0, support OTG;
- Control interface: CAN, RS232;
- Power supply: DC 8-12V;
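Given the diagonal FOV figures above, the ground coverage of a nadir-pointing camera follows from simple geometry. A quick sketch using the OV12895's 80° diagonal FOV (the helper name is illustrative):

```python
import math

def ground_footprint_diameter(altitude_m, fov_diag_deg):
    """Diagonal ground coverage (metres) of a nadir-pointing camera,
    computed from its diagonal field of view and the altitude."""
    return 2 * altitude_m * math.tan(math.radians(fov_diag_deg) / 2)

# Downward OV12895 camera (FOV 80 degrees diagonal) at 10 m altitude:
print(round(ground_footprint_diameter(10.0, 80.0), 2))  # 16.78
```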
Application areas: industrial drones, intelligent logistics vehicles, intelligent robots, smart factories, and other applications.
Technical support services:
[Figure: hardware system block diagram]