Hi,
I just got my drone today. I have the basic Tello drone (combo pack), not the EDU version. I've tried several Python libraries, such as DJITelloPy, TelloPy and tello-asyncio, to interact with the drone programmatically.
First of all, I keep getting an "invalid IMU" error for any movement command that specifies a distance or position. Is that because my drone is not the EDU version? There is plenty of light in the room, so I'm not sure why this happens. I can still send direct rc commands, so at least I can move the drone that way.
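For context, here is roughly the kind of thing I'm running with DJITelloPy (a minimal sketch, not my exact script; the distances, sleeps and battery check are just placeholders):

```python
# Minimal DJITelloPy sketch -- values below are placeholders.
from time import sleep
from djitellopy import Tello

tello = Tello()
tello.connect()
print("Battery:", tello.get_battery())

tello.takeoff()

# Position-based move: this is the kind of command that fails for me
# with the IMU error on the non-EDU Tello.
try:
    tello.move_forward(50)   # distance in cm
except Exception as e:
    print("move_forward failed:", e)

# Direct rc control still works:
# (left/right, forward/back, up/down, yaw), each in -100..100
tello.send_rc_control(0, 50, 0, 0)   # push forward
sleep(2)
tello.send_rc_control(0, 0, 0, 0)    # stop

tello.land()
```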
Secondly, once in the air the drone doesn't hover nice and steady like in the videos; instead it tends to drift all over the place, even up and down. That's quite annoying and I'm not sure what to do. I checked all the propellers and they're properly installed. I also calibrated the IMU, but I'm not sure that helped at all; it may even have made things worse.
Thirdly, the video feed is MASSIVELY delayed when I read the stream programmatically compared to the experience on my Android phone. I tried the solution proposed by a member of this forum to split video and drone control into separate threads, and also ran this example: tello-asyncio/video_opencv.py at main · robagar/tello-asyncio - to no avail. The video feed is at least 1.5 seconds behind the live action. I should note that I have an extremely beefy PC, so a hardware bottleneck seems very unlikely. This is the most important issue for me to fix, because I want to apply machine learning to the video feed, and with that much delay it's impossible to work with. I read that a range extender might help, but again, it works fine on Android and the drone is right next to the PC, so that seems odd.
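For reference, this is roughly how I've been pulling frames with OpenCV (a minimal sketch of the threaded-reader idea, assuming the drone's stream is already started with the SDK's streamon and arrives on udp://0.0.0.0:11111; the display/quit handling is just placeholder):

```python
# Sketch of the "keep only the latest frame" reader I tried.
import threading
import cv2

class LatestFrame:
    """Background reader that overwrites stale frames so the consumer
    only ever sees the most recent one instead of a queued backlog."""
    def __init__(self, url):
        self.cap = cv2.VideoCapture(url)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return self.frame

stream = LatestFrame("udp://0.0.0.0:11111")
while True:
    frame = stream.read()
    if frame is None:
        continue
    # ML processing would go here; for now just show the frame.
    cv2.imshow("Tello", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

stream.running = False
cv2.destroyAllWindows()
```

Even with this, the displayed image is still roughly 1.5 seconds behind what the drone actually sees.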
Thanks for your help