
Face tracking with Tello and gocv

I have this working with a PS3 controller via Bluetooth on Lubuntu. Great work, thank you. Waiting for Amazon to deliver a Xiaomi Wi-Fi range extender to see if that helps with quality. Wondering what else we can do with OpenCV here, like a “wall follow” mode that sends the Tello around the perimeter of a room at a constant height.
 
I did some experimentation with exactly this as well, but the short answer is that it isn't really possible with the Tello and OpenCV alone.

The Tello has no front-facing sensor other than the camera, so there is no way to get the drone's position relative to an object, only an estimated position relative to some other, known position.
The way the face tracking here works, and the reason you have to toggle it on and off, is that it captures a reference frame of your face, then moves forward and backward to keep your face the same size as it was when tracking was turned on.
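For anyone curious what that control loop looks like, here is a minimal sketch of just the forward/backward part, assuming you already have a detected face rectangle each frame and the gobot Tello driver. The dead band, the speed of 15, and the refArea handling are all illustrative guesses, not the actual code from this project:

```go
package facetrack

import (
	"image"

	"gobot.io/x/gobot/platforms/dji/tello"
)

// deadBand is an arbitrary 10% tolerance around the reference size.
const deadBand = 0.10

// trackDistance compares the current face rectangle's area against the
// area captured when tracking was toggled on (refArea) and nudges the
// drone forward or backward to keep them roughly equal.
func trackDistance(drone *tello.Driver, face image.Rectangle, refArea int) {
	area := face.Dx() * face.Dy()
	ratio := float64(area) / float64(refArea)
	switch {
	case ratio < 1.0-deadBand:
		drone.Forward(15) // face looks smaller than the reference: move closer
	case ratio > 1.0+deadBand:
		drone.Backward(15) // face looks larger than the reference: back off
	default:
		drone.Forward(0) // inside the dead band: stop moving
	}
}
```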
As far as tracking the perimeter of a room / wall following goes (I was trying to do a sort of autonomous image mapping of rooms), my drone would just keep crashing into walls, because there isn't enough recognizable data on a bare wall (unless there's a picture or something on it) to classify it properly once it extends past the boundaries of the frame.

It would probably work if you put a large sticker or something on every wall you want it to follow. It could also work with more capable image-analysis software, like a TensorFlow model.

The most interesting application that would actually work is probably putting up stickers around an area that represent drone commands, and having the drone follow those commands so it can properly follow walls and move in and out of rooms.
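A cheap way to prototype the sticker-command idea without training any classifier would be QR-code stickers, since gocv wraps OpenCV's QRCodeDetector. A rough sketch; the command vocabulary ("turn-left", "turn-right", "forward") is invented purely for illustration:

```go
package stickers

import (
	"fmt"

	"gobot.io/x/gobot/platforms/dji/tello"
	"gocv.io/x/gocv"
)

// executeSticker looks for a QR "command sticker" in the current frame
// and translates its decoded text into a Tello motion.
func executeSticker(drone *tello.Driver, frame gocv.Mat) {
	qr := gocv.NewQRCodeDetector()
	defer qr.Close()

	points := gocv.NewMat()
	defer points.Close()
	straight := gocv.NewMat()
	defer straight.Close()

	cmd := qr.DetectAndDecode(frame, &points, &straight)
	switch cmd {
	case "turn-left":
		drone.CounterClockwise(30)
	case "turn-right":
		drone.Clockwise(30)
	case "forward":
		drone.Forward(20)
	case "":
		// no sticker in view; keep doing whatever we were doing
	default:
		fmt.Println("unknown sticker command:", cmd)
	}
}
```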
 

Dear javaguy,

As far as I know, "following the walls" is something like road tracking (e.g. line tracking) with a higher degree of difficulty. At a minimum, the area must be large enough that the system (drone, PC, whatever) can "see" the wall as a wall, rather than just "something in front of it".
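For reference, the basic line-tracking building block in gocv would look something like the Canny + Hough sketch below. All thresholds here are rough guesses, and on a bare wall it will usually only find the floor/wall seam at best:

```go
package linetrack

import (
	"image"
	"image/color"
	"math"

	"gocv.io/x/gocv"
)

// findLines detects straight edges (e.g. the floor/wall seam) in a
// frame and returns a copy with the detected segments drawn in green.
func findLines(frame gocv.Mat) gocv.Mat {
	gray := gocv.NewMat()
	defer gray.Close()
	gocv.CvtColor(frame, &gray, gocv.ColorBGRToGray)

	edges := gocv.NewMat()
	defer edges.Close()
	gocv.Canny(gray, &edges, 50, 150) // edge thresholds: rough guesses

	lines := gocv.NewMat()
	defer lines.Close()
	gocv.HoughLinesP(edges, &lines, 1, math.Pi/180, 80)

	out := frame.Clone()
	for i := 0; i < lines.Rows(); i++ {
		v := lines.GetVeciAt(i, 0) // x1, y1, x2, y2
		gocv.Line(&out,
			image.Pt(int(v[0]), int(v[1])),
			image.Pt(int(v[2]), int(v[3])),
			color.RGBA{0, 255, 0, 0}, 2)
	}
	return out
}
```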
 
Some more info (I am researching this right now): tum_ardrone uses the PTAM algorithm, but that is pretty old by now. There is an improved implementation in ROS named ethzasl_ptam, and also a newer standalone implementation that needs only OpenCV, named gptam.

But now there are better algorithms: ORB-SLAM2 (an indirect sparse algorithm similar to PTAM, but with much better results), LSD-SLAM (a direct semi-dense algorithm), and DSO (a direct sparse algorithm, though it is only visual odometry, not a complete SLAM system). Slightly less known (but also interesting) monocular VO and/or SLAM algorithms are SVO, REMODE, DPPTAM, DTAM, ROVIO, OKVIS, VINS-Mono, and GTSAM. Looking at all of it, I would choose ORB-SLAM2 for now: it seems complete, simple, well implemented and documented, and the most universal.

One more finding: sparse (feature-based) VO/SLAM algorithms are probably not that good for obstacle avoidance, since you can apparently miss objects that have too few features (like a cabinet wall without texture). So dense or semi-dense algorithms would be better; maybe LSD-SLAM is therefore a better fit than ORB-SLAM2 for obstacle avoidance. There is also this code, which could maybe be used as a starting point: hypharos_ardrone_navigation (ARDrone autonomous indoor navigation by integrating LSD-SLAM with the IMU through a least-squares method).
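You can see that failure mode for yourself with gocv by counting ORB keypoints per frame (ORB is the feature detector that ORB-SLAM2 is built on): a textured scene gives hundreds of keypoints, a bare wall often gives almost none. A small sketch, with an arbitrary "enough texture" threshold:

```go
package features

import (
	"fmt"

	"gocv.io/x/gocv"
)

// featureRichness reports whether a frame has enough ORB keypoints for
// a sparse, feature-based pipeline to have something to work with.
// Blank walls typically yield very few keypoints, which is why sparse
// VO/SLAM can effectively "miss" them as obstacles.
func featureRichness(frame gocv.Mat) bool {
	orb := gocv.NewORB()
	defer orb.Close()

	kps := orb.Detect(frame)
	fmt.Printf("ORB keypoints: %d\n", len(kps))
	return len(kps) > 50 // arbitrary threshold, tune for your camera
}
```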
 

This is all very interesting, thank you for the research!
I'll definitely be taking a look through those to see if I can get some sort of practical obstacle detection/avoidance application going.

The AR.Drone is definitely a lot easier to develop on; a navigation program I made for it worked extremely well. The main difference is that you have direct access to the drone's IMU and internal sensor data, so you get pretty accurate calculations of where the drone is relative to its previous location. I initially tried something similar with the Tello using its speed and direction data, but unfortunately calculations based on those readings were unusable.
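For anyone who wants to try it anyway, the dead-reckoning idea looks roughly like the sketch below, integrating the NorthSpeed/EastSpeed fields the gobot Tello driver exposes in its FlightData. As noted above, the Tello's reported speeds were too noisy for this to produce usable positions; the sketch just makes the idea concrete:

```go
package deadreckon

import (
	"fmt"
	"time"

	"gobot.io/x/gobot/platforms/dji/tello"
)

type position struct{ x, y float64 }

// trackPosition naively integrates the Tello's reported velocities over
// time (Euler integration) to estimate position. Units of the speed
// fields are not verified here, and in practice the readings are too
// noisy for real navigation.
func trackPosition(drone *tello.Driver) {
	var pos position
	last := time.Now()

	drone.On(tello.FlightDataEvent, func(data interface{}) {
		fd := data.(*tello.FlightData)
		now := time.Now()
		dt := now.Sub(last).Seconds()
		last = now

		pos.x += float64(fd.EastSpeed) * dt
		pos.y += float64(fd.NorthSpeed) * dt
		fmt.Printf("estimated position: %+v\n", pos)
	})
}
```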
 
Hi javaguy,

Thank you for your great work.
Setup and launch were successful, and face tracking also seems to work well. But the video is very noisy. How can I fix that?

Thanks for your help!!
 
