
Applying computer vision techniques to Tello

Thank you! This is the first Tello project I have seen that really impresses me. But it seems like OpenPose requires too much processing power for real-time use on a mobile device.
That's true, OpenPose requires a lot of processing power, but there seem to be other pose-estimation models that can run on smartphones. An example here (I have never tried it): edvardHua/PoseEstimationForMobile
 
Yes, I had seen that one before. It does 2 fps on my reference phone (Samsung S4). I haven't tested this, but I guess 5-8 fps is the minimum for reliable control.


Anyway, thanks for this project. Really impressive work.
I particularly enjoyed the morse code for takeoff :)
 

I have just realized that you are the author of an Android app. I have read very good reviews of it, and I will try it! I imagine it must be quite difficult to deploy such a technology (OpenPose, or neural nets more generally), whose efficiency depends very much on the available processing power, so that it works equally well on a wide selection of phones. The Morse code would surely be easier to integrate :)

I am still discovering the Tello. May I take the opportunity to ask you a few questions about the IMU, if you don't mind? Is it what you are using in your app for the "return to home" autopilot? What is the precision of the IMU? Do you think it is possible to replay a precise circuit (inside a house) by recording the data from the IMU? Thx.
 
I will send you a promo code for TelloFpv in a direct conversation to play with. Be my guest and thank you for a really awesome video!

As soon as I have the next 1-2 releases done I'll play a bit with CV.
But I don't think such an app would be a commercial success. The functionality is super cool, but it would need to work reliably on average phones. Maybe in 2-3 years, with new CV frameworks and better phones.


As for the IMU: I don't use it (well, not directly). An IMU alone could not produce acceptable accuracy; the errors would add up quickly.

Tello uses the down-facing VPS camera and IMU data to calculate a very accurate position onboard. When not in "SDK" mode, Tello can send lots of telemetry data to the phone, including position data. So it's fairly easy to bring Tello back home.
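As an illustration (not the app's actual implementation), the return-to-home move can be derived from that telemetry with a little trigonometry: from the current (x, y) position relative to takeoff and the current yaw, compute the turn and distance back home. The function name and axis convention here are assumptions:

```python
import math

def home_vector(x, y, yaw_deg):
    """Given the drone's position (x, y) in metres relative to takeoff
    and its current yaw in degrees, return the relative turn (degrees)
    and distance (metres) to fly straight back to the takeoff point."""
    distance = math.hypot(x, y)
    # Absolute bearing from the drone back to the origin
    bearing = math.degrees(math.atan2(-y, -x))
    # Turn needed relative to the current heading, normalised to [-180, 180)
    turn = (bearing - yaw_deg + 180.0) % 360.0 - 180.0
    return turn, distance

# Example: 3 m along x, 4 m along y from takeoff, yaw 0
turn, dist = home_vector(3.0, 4.0, 0.0)
```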

Indoors with good light, the 2D position is very accurate (altitude is not really precise). To replay an indoor course you'd need to know the exact starting position and direction. It should be possible to calibrate after takeoff with some QR codes or other markers, and maybe re-calibrate with a few markers along the way.
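The marker idea boils down to a frame alignment: once a marker with a known room position has been spotted, the drone's odometry frame can be mapped onto the room frame. A minimal 2D sketch, with made-up names and assuming the yaw offset between the two frames is already known:

```python
import math

def make_odom_to_world(ref_odom, ref_world, yaw_offset_deg):
    """Return a function mapping (x, y) odometry coordinates to room
    coordinates, given one reference point (e.g. a marker) seen in both
    frames and the yaw offset between the frames."""
    th = math.radians(yaw_offset_deg)
    c, s = math.cos(th), math.sin(th)
    ox, oy = ref_odom
    wx, wy = ref_world
    def to_world(x, y):
        # Rotate about the reference point, then translate into the room frame
        dx, dy = x - ox, y - oy
        return (wx + c * dx - s * dy, wy + s * dx + c * dy)
    return to_world

# Example: odometry origin corresponds to room point (2, 3),
# and the odometry frame is rotated 90 degrees relative to the room
to_world = make_odom_to_world((0.0, 0.0), (2.0, 3.0), 90.0)
```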

The native non-SDK communication is more complicated than the SDK commands, as it's all binary with checksums and so on. There is no protocol documentation that I am aware of besides what is available in the various threads on this site, the Tellopilots Wiki, and Krag's C# tellolib.
 
Thank you very much, volatello! I am impatient for an opportunity to try TelloFpv outside (too much wind these days where I live).

Thanks also for your information about the IMU. Here is the idea I have in mind: the goal is to use the Tello as a camera operator when shooting a video. There would be 2 phases:
1) Manual recording of a circuit: I move around the room where I want to shoot the video, holding the Tello in my hand (motors off), and each time I "enter" a predefined Morse code, the program records the position of the Tello (x, y, z) plus the yaw orientation.
2) Automatic replay of the circuit: the program makes the Tello fly through the sequence of recorded points. My hope is that the video recorded during the replay will contain beautiful tracking shots :cool:
As you said, both phases should begin with a position calibration using a marker.
Well, maybe it is a bit unrealistic, but I will have to try to find out.
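The two phases could be sketched like this: phase 1 produces a list of (x, y, z, yaw) waypoints, and phase 2 turns consecutive waypoints into relative moves (turn, fly forward, climb). Purely illustrative; it ignores how the moves would actually be sent to the drone:

```python
import math

def replay_plan(waypoints):
    """Turn recorded (x, y, z, yaw_deg) waypoints into relative moves:
    (turn in degrees, horizontal distance, climb) from each point to the next."""
    plan = []
    for (x0, y0, z0, yaw0), (x1, y1, z1, _) in zip(waypoints, waypoints[1:]):
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
        turn = (heading - yaw0 + 180.0) % 360.0 - 180.0  # normalise to [-180, 180)
        plan.append((turn, math.hypot(x1 - x0, y1 - y0), z1 - z0))
    return plan

# Recorded while "walking the circuit": start at the origin, facing +x
waypoints = [(0.0, 0.0, 1.0, 0.0), (1.0, 0.0, 1.0, 0.0), (1.0, 1.0, 1.5, 90.0)]
plan = replay_plan(waypoints)
```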
 
Tello doesn't know its position when you hold it in your hands; for that it needs to fly. So you would have to fly the circuit under manual control, recording key positions and directions along the way. Some autopilot function is required to move the Tello from point to point.

The initial position calibration using markers will be required to ensure Tello's position and heading are exactly the same on each run.
Indoors in good conditions the VPS does an excellent job: Accuracy after 20m flight is within maybe 30cm. Good conditions means no tables, chairs, or other significant changes of ground level, no polished tiles or plain grey carpets without visual markers, no dark areas, no high speed flights.
Additional markers to re-calibrate underway may be required to optimize accuracy.
 
Thanks to your post and some tests I've just made, I'm beginning to understand a bit better how the positioning system works. And it is not good news for my "camera operator" project. With the TelloPy package, the information I can get is labelled: mvo.vel_x, mvo.vel_y, mvo.vel_z, mvo.pos_x, mvo.pos_y, mvo.pos_z, imu.acc_x, imu.acc_y, imu.acc_z, imu.gyro_x, imu.gyro_y, imu.gyro_z, imu.q0, imu.q1, imu.q2, imu.q3, imu.vg_x, imu.vg_y, imu.vg_z

I don't know what mvo stands for, but I imagine it corresponds to data coming from the VPS.
I don't need to make the drone fly to see change in values.
In the graph below, I have drawn the trajectory (mvo.pos_x, mvo.pos_y) as I was walking, holding the drone horizontally but paying attention not to cover the sensors.
I walked 3 times along the same rectangular path (5m x 0.5cm).
[Graph: trajectory (mvo.pos_x, mvo.pos_y) over the 3 laps]
Too much variation to be usable.
Another test: if I hold the drone perfectly still and I move a book about 40 cm below, the values mvo.pos_* change as if the drone was moving.

In contrast, if I calculate the yaw angle from the quaternion, the values I get seem much more consistent, even if I "shake" the drone in all directions. But I agree with you that the IMU alone will not give acceptable results for the position.
The use of markers could help but would be too burdensome for my project.
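For reference, the yaw computation from the quaternion can be sketched as follows (assuming imu.q0 is the scalar component and the usual aerospace ZYX convention, which I haven't verified against Tello's documentation):

```python
import math

def yaw_from_quaternion(q0, q1, q2, q3):
    """Yaw in degrees from a unit quaternion with q0 as the scalar part
    (standard ZYX / aerospace convention)."""
    return math.degrees(math.atan2(2.0 * (q0 * q3 + q1 * q2),
                                   1.0 - 2.0 * (q2 * q2 + q3 * q3)))

# Identity quaternion: no rotation, so yaw is 0
print(yaw_from_quaternion(1.0, 0.0, 0.0, 0.0))
```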

Never mind, I have other ideas I want to explore.
 
Interesting catch. Ryze must have changed this in recent firmware; Tello would give position data only in flight back when I started TelloFpv development.
Keep us posted, it seems you are working on interesting and challenging projects.
 
I hope all of these commands get bundled into a single piece of software, with a simple menu and commands that anyone can run on a PC.
Well, I wish it were that simple, but currently it is not really the case. First, you need a PC with a powerful GPU to run OpenPose comfortably. Secondly, as I explain in my GitHub repo geaxgx/tello-openpose, you need to install some dependencies, OpenPose being one of them. OpenPose installation is not as straightforward as installing a Python package, because you will need to compile it, but it is well explained on their website.
 


I guess this summer I have a cool project to work on (or let's say copy and paste the code, since I'm ignorant when it comes to programming). Lol, thanks bro!!
 
Hi All,

Can anyone help list the step-by-step installation of


OpenCV, pynput, pygame :
Mainly used for the UI (display windows, read keyboard events, play sounds). Any recent version should work.
Openpose :
I use the official release CMU-Perceptual-Computing-Lab/openpose
Installed (with Python API) as explained here : CMU-Perceptual-Computing-Lab/openpose

I am using Ubuntu 19.04. Please enlighten me: how can I check that all these programs have been installed on my computer?

Thanks

Best Regards,
frequenccy
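One quick way to check is to try importing each package from Python and printing its version. This is a generic sketch, not specific to any distribution: the names used are the usual import names (cv2 for OpenCV), and pyopenpose only exists if OpenPose was built with its Python API enabled:

```python
import importlib

def check(modules):
    """Try to import each module; return {name: version string or None}."""
    status = {}
    for name in modules:
        try:
            mod = importlib.import_module(name)
            status[name] = getattr(mod, "__version__", "installed")
        except ImportError:
            status[name] = None
    return status

# cv2 = OpenCV; pyopenpose is only present if OpenPose was built with the Python API
result = check(["cv2", "pynput", "pygame", "pyopenpose", "tellopy"])
for name, version in result.items():
    print(f"{name}: {version or 'NOT INSTALLED'}")
```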
 
@geaxgx thanks so much for sharing such an amazing project!

A small and humble contribution if you allow me would be to add
params["net_resolution"] = "160x80" (or different combination depending on your GPU) after params["number_people_max"] = number_people_max in Class OP __Init__.
As I have a small GeForce MX150 with 2GB it can only take 160x80.

One thing I couldn't find is how to turn off the log/debug information being printed on the console.
There are so many of them, like "Tello: 16:54:32.462: Info: video data 378513 bytes 184.7KB/sec", that I would like to turn them off. Thank you.
 

Thx for your comment!

Giving the possibility to change the net_resolution parameter is a good idea. I haven't tried it myself with a low resolution, but it may help many people who don't have a powerful GPU. How many fps do you get with OpenPose on your MX150?
I don't have time in the near future to make and test modifications to my code, but I will try to do it later.

For the log verbosity, the messages like the video throughput are from the tellopy library.
The verbosity is hardcoded in my code. Probably something I could improve too :)
In the short term, in tello_openpose.py, you can search for:

def init_drone(self):
    """
    Connect to the drone, start streaming and subscribe to events
    """
    if self.log_level:
        self.drone.log.set_level(2)

and replace the last two lines with:

    self.drone.log.set_level(0)
This way, you should get only the errors from tellopy.
 

Hello @geaxgx, with an MX150, when I switch on OpenPose I get around 6 fps.
Thanks for the hint on the log level! It works well.
Now I'm looking at recording the video to an mp4.
I've been able to toggle the recording when I hit "r", but when I read the .mp4 with VLC I see nothing.


In __main__ I added the following two lines, and I passed the writer in the call to main.
fourcc = cv2.VideoWriter_fourcc(*'H264')
outputfile = cv2.VideoWriter("VideoOutput.mp4",fourcc,30,(640,480))

main(use_multiprocessing=args.multiprocess, log_level=args.log_level, outputfile = outputfile)

Have you managed to write the video output to a file on your side?
 

To make the YouTube video, I used a screen recorder. But if I wanted to record the video programmatically, I would do it like you (for instance, when you call "python OP.py -o output.avi", it records the result in a file).
But I have already noticed that with very short videos produced with a VideoWriter call, VLC seems to read the video but displays nothing. By enabling the loop button in VLC, the video is displayed correctly on the second and following loops. I have no explanation for this strange behavior :)
 
