That's true, OpenPose requires a lot of processing power, but it seems there are other models for pose estimation that can run on smartphones. An example here (I never tried it): edvardHua/PoseEstimationForMobile

Thank you! This is the first Tello project I have seen that really impresses me. But it seems like OpenPose requires too much processing power for realtime use on a mobile device.
I have just realized that you are the author of an Android app. I have read very good reviews on it; I will try it! I imagine it must be quite difficult to deploy such a technology (OpenPose or, more generally, neural nets) whose efficiency depends very much on the available processing power, and that has to work equally well on a wide selection of phones. The Morse code would surely be easier to integrate.

Yes, that one I had seen before. It does 2 fps on my reference phone (Samsung S4). I haven't tested this, but I guess 5-8 fps is the minimum for reliable control.
Anyway, thanks for this project. Really impressive work.
I particularly enjoyed the morse code for takeoff
Interesting catch. Ryze must have changed this in recent firmwares. Tello would give position data only in flight back when I started TelloFpv development.

Thanks to your post and some tests I've just made, I'm beginning to understand a bit better how the positioning system works. And it is not good news for my "camera operator" project. With the TelloPy package, the pieces of information I can get are labelled: mvo.vel_x, mvo.vel_y, mvo.vel_z, mvo.pos_x, mvo.pos_y, mvo.pos_z, imu.acc_x, imu.acc_y, imu.acc_z, imu.gyro_x, imu.gyro_y, imu.gyro_z, imu.q0, imu.q1, imu.q2, imu.q3, imu.vg_x, imu.vg_y, imu.vg_z
I don't know what mvo stands for, but I imagine it corresponds to data coming from the VPS.
I don't need to make the drone fly to see the values change.
In the graph below, I have plotted the trajectory (mvo.pos_x, mvo.pos_y) as I was walking while holding the drone horizontally, paying attention not to cover the sensors.
I walked 3 times along the same rectangular path (5 m x 0.5 cm).
There is too much variation to be usable.
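One quick way to put a number on that drift: a closed lap should end where it started, so the distance between the first and last logged position of a lap is a direct drift estimate. A minimal sketch (the samples below are synthetic stand-ins, not my real mvo logs):

```python
import math

def loop_closure_error(track):
    """Distance between the first and last point of one lap.

    For a closed walk this should be ~0 m if the position estimate
    were reliable; a large value means the estimate drifted.
    """
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.hypot(x1 - x0, y1 - y0)

# Synthetic stand-in for one lap of (mvo.pos_x, mvo.pos_y) samples:
# the walk returns to the start, but the estimate drifted ~1.3 m.
lap = [(0.0, 0.0), (5.0, 0.1), (5.2, 0.6), (0.4, 0.5), (0.5, -1.2)]
print(round(loop_closure_error(lap), 2))  # 1.3
```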
Another test: if I hold the drone perfectly still and move a book about 40 cm below it, the mvo.pos_* values change as if the drone were moving.
In contrast, if I calculate the yaw angle from the quaternion, the values I get seem much more consistent, even if I "shake" the drone in all directions. But I agree with you that the IMU alone will not give acceptable results for the position.
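For reference, the yaw extraction from the quaternion is the standard Z-Y-X Euler formula; a minimal sketch, assuming imu.q0..q3 are ordered w, x, y, z (the usual w-first convention):

```python
import math

def yaw_from_quaternion(q0, q1, q2, q3):
    """Heading (yaw, radians) from a unit quaternion (w, x, y, z).

    Standard Z-Y-X Euler angle extraction; only the yaw term is computed.
    """
    return math.atan2(2.0 * (q0 * q3 + q1 * q2),
                      1.0 - 2.0 * (q2 * q2 + q3 * q3))

# Identity quaternion -> 0 rad; 90-degree rotation about Z -> pi/2.
print(yaw_from_quaternion(1.0, 0.0, 0.0, 0.0))  # 0.0
half = math.pi / 4
print(round(yaw_from_quaternion(math.cos(half), 0.0, 0.0, math.sin(half)), 4))  # 1.5708
```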
The use of markers could help but would be too burdensome for my project.
Never mind, I have other ideas I want to explore.
Well, I wish it were that simple, but currently it is not really the case. First, you need a PC with a powerful GPU to run OpenPose comfortably. Secondly, as I explain in my github geaxgx/tello-openpose, you need to install some dependencies. OpenPose is one of them. Installing OpenPose is not as straightforward as installing a Python package, because you will need to compile it, but it is well explained on their website.

I hope all of these commands are bundled into a single piece of software, with a simple menu and commands, that can be run by anyone on a PC.
@geaxgx thanks so much for sharing such an amazing project!
A small and humble contribution, if you allow me, would be to add
params["net_resolution"] = "160x80" (or a different combination depending on your GPU) after params["number_people_max"] = number_people_max in the OP class's __init__.
As I have a small GeForce MX150 with 2 GB, it can only take 160x80.
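If I remember correctly, OpenPose wants each net_resolution dimension to be a multiple of 16 (with -1 meaning "scale to preserve the aspect ratio"), so a small sanity check can save some trial and error when experimenting with different combinations (a hypothetical helper, not part of the repo):

```python
def valid_net_resolution(res):
    """Check an OpenPose net_resolution string such as "160x80".

    Assumes OpenPose's documented constraint that each dimension is
    a multiple of 16, with -1 allowed to keep the aspect ratio.
    """
    try:
        w, h = (int(v) for v in res.split("x"))
    except ValueError:
        # Not two "x"-separated integers.
        return False
    return all(v == -1 or (v > 0 and v % 16 == 0) for v in (w, h))

print(valid_net_resolution("160x80"))  # True
print(valid_net_resolution("-1x368"))  # True
print(valid_net_resolution("150x80"))  # False
```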
One thing I couldn't find is how to turn off the log/debug information being printed on the console.
There are so many messages, like "Tello: 16:54:32.462: Info: video data 378513 bytes 184.7KB/sec", which I don't need, that I would like to turn them off. Thank you.
Thx for your comment !
Giving the possibility to change the net_resolution parameter is a good idea. I haven't tried it myself with a low resolution, but it may help the many people who don't have a powerful GPU. How many fps do you get with openpose on your MX150?
I don't have time in the near future to make and test modifications to my code, but I will try to do it later.
For the log verbosity, the messages like the video throughput come from the tellopy library.
The verbosity is hardcoded in my code; probably something I could improve too.
In the short term, in tello_openpose.py, you can search for:

    Connect to the drone, start streaming and subscribe to events

then for:

    if self.log_level :

and replace the last 2 lines by:

This way, you should get only the errors from tellopy.
To make the Youtube video, I used a screen recorder. But if I wanted to programmatically record a video, I would do it like you (for instance, when you call "python OP.py -o output.avi", it records the result in a file).

Hello @geaxgx, with an MX150, when I switch on openpose I have an FPS of ~6.
Thanks for the hint on log level! It works well.
Now I'm looking at recording the video to an mp4.
I've been able to toggle the recording when I hit "r", but when I read the .mp4 with VLC I see nothing.
In __main__, I added the following 2 lines, and I passed the file in the call to main.
fourcc = cv2.VideoWriter_fourcc(*'H264')
outputfile = cv2.VideoWriter("VideoOutput.mp4",fourcc,30,(640,480))
main(use_multiprocessing=args.multiprocess, log_level=args.log_level, outputfile = outputfile)
Have you managed to write the video output to a file on your side?