
Tello Localization: Odometry

pgminin

Well-known member
Joined
Jan 7, 2019
Messages
103
Reaction score
100
I made a demo video of Tello localization using odometry.
The position is calculated from the vgx and vgy velocities provided by the Tello state in SDK mode.
The analysis shows how quickly the error can accumulate, so odometry alone is unreliable.


Algorithm: dead reckoning from a motion model developed with the Tello Vision Telemetry Lab

Dataset: telemetry generated with Tello Vision 1D App

Ground truth: Mavic Mini video by #manuelvenuti
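The dead reckoning above can be sketched in a few lines. This is a minimal illustration, not the actual Lab code; the sampling period and the velocity units are assumptions (the SDK reports speeds in a fixed unit, and any scale error accumulates directly as drift):

```python
# Minimal dead-reckoning sketch: integrate the vgx/vgy velocities from the
# Tello state into an x/y position estimate. Every velocity error is
# integrated too, which is why the position drifts over time.

def integrate_odometry(samples, dt):
    """samples: list of (vgx, vgy) velocity pairs; dt: sampling period in s.
    Returns the list of (x, y) positions obtained by dead reckoning."""
    x = y = 0.0
    path = [(x, y)]
    for vgx, vgy in samples:
        x += vgx * dt
        y += vgy * dt
        path.append((x, y))
    return path

# Constant 1 m/s along x for 10 samples at 10 Hz -> about 1 m travelled.
path = integrate_odometry([(1.0, 0.0)] * 10, dt=0.1)
```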
 
Well, I am no expert, but may I suggest one thing, even if it sounds naive:
If one records straight paths (North to South, East to West, South to North, West to East) and then compares the real path with the recorded path, is it possible to derive some kind of constant, or curve, even a very subtle one, that can be applied to the desired path to correct it, even slightly?
 
Yes, I think a calibration of that sort can improve the velocity estimation, but odometry will still be inherently inaccurate.
I'm working on another approach: estimating the Tello's position relative to a detected person (assumed to be stationary).
It seems to give better localization because it does not drift over time.
The next step will be fusing the two estimates: odometry and person-relative position.
It sounds complicated, but it is standard practice in current robotics localization!

So more videos to come!
 
Hi, your work is excellent! Yesterday something caught my attention: the almost-failed flight of the Ingenuity drone on Mars. I read in a blog that the drone orients itself using a sequence of images, and after reading that I started some tests with Python, cv2, and the cv2.TM_SQDIFF method for template matching: taking images, selecting a small area in the middle of the frame, running the match, and locating where that small area has moved within the frame. From the centroid this gives me the subsequent position, so I can integrate this variation to find velocity, direction, and so on. If I could use a Kalman filter to fuse your data with this method, maybe we could have a good positioning system. Sounds complicated... yes, a little... when I have time to finish the template-matching part I will post it here.
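The matching idea can be tried without a drone at all. The sketch below is a numpy-only stand-in for what cv2.matchTemplate with TM_SQDIFF computes (sum of squared differences over every offset, argmin wins); the synthetic frame and all names are illustrative, not the poster's script:

```python
# Slide a template over the frame, score each offset by the sum of squared
# differences (the TM_SQDIFF criterion), and take the argmin as the template's
# new location. The shift of that location between frames is the displacement
# in pixels.
import numpy as np

def match_sqdiff(frame, template):
    """Return (row, col) of the best TM_SQDIFF match of template in frame."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            score = np.sum((frame[r:r + th, c:c + tw] - template) ** 2)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic frame with a bright 4x4 patch at row 5, col 7; the template is
# that patch, so the match should land exactly there.
frame = np.zeros((20, 20))
frame[5:9, 7:11] = 1.0
template = frame[5:9, 7:11].copy()
pos = match_sqdiff(frame, template)  # -> (5, 7)
```

In practice cv2.matchTemplate does the same scoring in optimized C, and cv2.minMaxLoc extracts the best location.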
 

Attachments

  • alvo.jpg
    alvo.jpg
    13.4 KB · Views: 5
  • foto2.jpg
    foto2.jpg
    48.5 KB · Views: 5
  • foto3.jpg
    foto3.jpg
    47.3 KB · Views: 5
  • objetos2.jpg
    objetos2.jpg
    71.1 KB · Views: 5
  • foto5.jpg
    foto5.jpg
    47.4 KB · Views: 4
  • foto4.jpg
    foto4.jpg
    49.6 KB · Views: 4
  • objetos3.jpg
    objetos3.jpg
    70 KB · Views: 3
  • objetos4.jpg
    objetos4.jpg
    73.4 KB · Views: 4
  • objetos5.jpg
    objetos5.jpg
    71 KB · Views: 4
Thank you for the compliment; I'm following and very inspired by the Ingenuity drone too!
Its navigation sensors (no GPS) are very similar to the Tello's!
Your idea is very interesting, so keep exploring it!
I'm also trying visual odometry from the frontal camera; I'm quite busy now, so I'll describe it in another post.
 
OK, this is the beginning:
I don't have a mirror to fit to the Tello camera, so I went for a walk to test only the algorithm, based on:
1. cv2.TM_SQDIFF_NORMED and Python to slice the central area of the acquired frame and perform the comparison every 15 frames, so the script can determine, via the centroid, the speed (in pixels) and direction.
2. Recording a video showing the selected area window, the "vision window" on the main footage, and the speed and direction data on a video overlay.
3. Text files containing the speed data and direction data.
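Step 1 can be sketched as follows. Only the 15-frame stride comes from the post; the 30 fps frame rate, the function, and its names are assumptions for illustration:

```python
# Hypothetical sketch: given the matched patch centroid every 15 frames, turn
# the displacement into a pixel speed and a direction angle.
import math

FPS = 30          # assumed camera frame rate
STRIDE = 15       # frames between matches, as in the post

def speed_and_direction(prev_cxy, curr_cxy):
    """Pixel speed (px/s) and direction (degrees, atan2 convention)."""
    dx = curr_cxy[0] - prev_cxy[0]
    dy = curr_cxy[1] - prev_cxy[1]
    dt = STRIDE / FPS
    speed_px = math.hypot(dx, dy) / dt   # displacement over elapsed time
    direction = math.degrees(math.atan2(dy, dx))
    return speed_px, direction

# 10 px along +x over 0.5 s -> 20 px/s, direction 0 degrees.
speed, heading = speed_and_direction((100, 100), (110, 100))
```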

Conclusions:
1. Yes, I need sensor fusion to add gyro, compass, and accelerometer; you can see that the speed goes crazy sometimes and the direction flips positive/negative due to frame wobbling. Sometimes it just cannot find the correct match and gets confused.
2. Yes, this is an initial failure (lol), but I'll keep trying (lol).
3. No, it does not deserve a GitHub section yet; maybe when I have more victories than failures.
4. Well, I'll try harder.
link to video:
data files are attached
 

Attachments

  • speed_data.txt
    12.7 KB · Views: 5
  • dir_data.txt
    15.3 KB · Views: 2
Very well done!
I didn't come up with the mirror idea because I need the frontal camera for object detection, BUT... your approach is very interesting: you can learn a lot about the navigation/odometry problem, understand the downward navigation camera algorithm, and explore sensor fusion. Indeed, monocular odometry alone can give data, but without the right scale. You need an IMU/altimeter to do visual-inertial odometry.
So I'm going to inspect the data from your work in the next days; keep me updated on your progress!
 
Thanks!
My idea was to place a mirror in front of the camera so I could see both down below and forward.
The downward view was to be analyzed.
But I saw a video where a guy placed a camera on a chicken's head to make a video stabilizer, and it worked. So I am thinking about using a V-shaped mirror pointed down, so I can get two images and do a differential computation. I think this will improve error rejection.
Off topic: I think chickens have this excellent vision-tracking system because they are bipeds and their tail does not help with balance (dinosaurs used a tail for that), so Nature developed a visual orientation system for them, and an excellent one at that.
 
Sorry, I think this format (CSV) is better.
Please rename the files to .csv (I cannot upload .csv files here).
 

Attachments

  • speed_data_csv.txt
    7.7 KB · Views: 2
  • dir_data_csv.txt
    9.3 KB · Views: 0
I made a new demo video of Tello localization using position estimation from person detection.
The detected person is assumed to be in a fixed position, so from the 3D person position (also used in auto modes like orbit and follow) the Tello can self-localize.


odometry_localization_error_yellow.png



Odometry error (blue) compared to the error of the position estimated from person detection (yellow).
Pros: the estimated position is noisier, but it does not drift, and the error is less than 2 m most of the time.
Cons: there is a glitch due to a false detection; moreover, this approach requires a stationary person in the FOV.
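The geometry behind this can be sketched as follows. This is an illustration of the idea, not the actual code; the range/bearing parametrization and all names are assumptions:

```python
# If a detected person is assumed fixed at a known world position, and the
# detector gives the person's position relative to the drone (range and
# bearing in the drone's yaw frame), the drone can invert that measurement to
# localize itself.
import math

def self_localize(person_world, rel_range, rel_bearing_deg, drone_yaw_deg):
    """Drone (x, y) from a fixed person at person_world and a relative fix."""
    ang = math.radians(drone_yaw_deg + rel_bearing_deg)
    # The person sits at drone_pos + range * unit(ang); solve for drone_pos.
    return (person_world[0] - rel_range * math.cos(ang),
            person_world[1] - rel_range * math.sin(ang))

# Person at (5, 0), seen 5 m dead ahead while the drone faces +x
# -> the drone must be at the origin.
pos = self_localize((5.0, 0.0), 5.0, 0.0, 0.0)
```

Because each fix depends only on the current detection, errors do not accumulate over time, which is why this estimate is noisy but drift-free.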
 
Hi! This is the second test, only to check the algorithm.
I used drone footage from this guy's site, and I noticed:
1. It looks promising when the video is taken from some altitude, because there are fewer details than in a closer view (maybe that's why Ingenuity turned off its optical navigation system when it went in for landing).
2. I checked every 3 frames, instead of every 15 as in the first test.
3. Speed sometimes drops to zero because I have to filter it during every new frame load (of course a new frame gives a good match in place, thus no displacement and no speed).
4. I still have not defined the speed in meters because it depends on altitude and on calculation over 10 frames; besides, in this footage the drone is panning up and down, so the reference is not fixed.
5. Direction is a function of arctan2, so 90 degrees should read as 0 degrees.
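The remap hinted at in point 5 can be written explicitly. The intended convention is an assumption here (heading measured clockwise from "up" instead of counter-clockwise from +x):

```python
# atan2 returns 0 deg along +x and grows counter-clockwise, so a track that
# should read 0 comes out as 90. One common remap to a compass-style heading:
import math

def atan2_to_heading(deg):
    """Map an atan2 angle (deg, CCW from +x) to a heading (deg, CW from +y)."""
    return (90.0 - deg) % 360.0

h = atan2_to_heading(90.0)   # the +y direction -> heading 0
```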

Well, as usual I have the link to video and data.
link
 

Attachments

  • dir_data.csv.txt
    197 bytes · Views: 0
  • speed_data.csv.txt
    144 bytes · Views: 0
Well, I think I am on the right path.
video
coarse.png
speed.png
 

Attachments

  • dir_data.txt
    5.1 KB · Views: 1
  • speed.txt
    4.8 KB · Views: 1
Speed and course are recorded every 30 frames; the speed is given in pixels/sec, so I need the altitude to calculate the approximate real speed, because the altitude is decreasing too.
In this movie the camera sometimes pans or tilts, so the data is not always reliable.
I will try attaching a mirror to my Tello, so all the images will be from a fixed reference.
Now that we have polar data, it is easy to transform it into rectangular data.
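That polar-to-rectangular step is a one-liner; a small sketch with illustrative names:

```python
# Speed (magnitude) and course (angle) become vx, vy velocity components.
import math

def polar_to_rect(speed, course_deg):
    rad = math.radians(course_deg)
    return speed * math.cos(rad), speed * math.sin(rad)

vx, vy = polar_to_rect(10.0, 0.0)   # course 0 -> all speed along x
```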
 
"Speed estimation is inaccurate and sometimes unreliable." In those words lies the odometry problem! Even if after a while you recover a good speed estimate, the errors have already been injected into the position estimate, and you can't recover from them. So position estimated from speed almost always drifts with time.

Here is a very recent video on robust fusion of many odometry algorithms to improve this:
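The fusion idea running through this thread can be sketched minimally: combine a drifting odometry estimate with a noisy but drift-free absolute fix by inverse-variance weighting, which is the core of a Kalman update. The variance values below are made-up illustration numbers:

```python
# Fuse two independent estimates of the same position. The one with the
# smaller variance (higher confidence) dominates the fused result.
def fuse(odom_pos, odom_var, fix_pos, fix_var):
    """Inverse-variance weighted fusion of two position estimates."""
    w = fix_var / (odom_var + fix_var)        # weight on the odometry estimate
    fused = w * odom_pos + (1 - w) * fix_pos
    fused_var = (odom_var * fix_var) / (odom_var + fix_var)
    return fused, fused_var

# Odometry says 12 m (variance 4); the person fix says 10 m (variance 1):
pos, var = fuse(12.0, 4.0, 10.0, 1.0)
```

The fused variance is always smaller than either input, which is why fusing a drifting and a drift-free source beats either one alone.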

 
Yes, thanks, I noticed that.
Mainly because I would need a radar to check the altitude relative to the ground (barometers are referenced to sea level) and adjust the real speed based on the virtual speed in pixels.
As we know, the Tello does not have such a radar (at least no available data), so I finished this "project", but... lol... I can invert the parameters and create a mapping system comparing the real speeds vx, vy against the virtual or visual speed, and make something like the MRO (Mars Reconnaissance Orbiter) to create 3D maps... (or use Google Earth...). Well, anyway, it was fun and I've learned a lot about cv2 and Python.
Thanks for the patience and info!
 
Well, you don't need a radar, because you have the ToF (time of flight) altimeter! It works well below 10 m of height.
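With the ToF height, the missing scale can be recovered. For a downward-looking camera, ground meters per image pixel is roughly height divided by focal length in pixels; the focal length value below is a placeholder assumption, not a measured Tello parameter:

```python
# Convert optical-flow speed in px/s to ground speed in m/s using the
# height above ground reported by the ToF sensor (pinhole camera model).
FOCAL_PX = 920.0   # assumed focal length of the camera, in pixels

def pixel_speed_to_mps(speed_px, height_m, focal_px=FOCAL_PX):
    """Ground speed (m/s) from pixel speed (px/s) at a given height."""
    return speed_px * height_m / focal_px

v = pixel_speed_to_mps(460.0, 2.0)   # at 2 m height: 460 px/s -> 1.0 m/s
```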
 
Hi guys, my name is Ali.

I'm working on a project for solar panel inspection, which runs fully autonomously while also performing defect detection. At some point I needed some of the flight path information for further analysis, and @pgminin's method was really nice, with a really creative representation. I went to the GitHub page uploaded by @pgminin and found all the source code that can perform the analysis on the data obtained from the Android app.

Unfortunately, I do not have access to an Android device to obtain the telemetry data required to represent the autonomous path taken by the Tello, which can only be provided by the application, because I'm currently using an iPhone.

@pgminin, so is it possible to get a Python script that enables me to obtain the telemetry data after flying? Or even in real time?

I would really appreciate the help!
 
