
Corrupted frames on model usage (darknet model)

Rom13IL · Member · Joined Dec 21, 2020 · Messages: 7 · Reaction score: 1
Hi, I have been working on a Tello interface for the past half a year for my final-year project, and tomorrow we have a presentation for the parents and other students. I really want to do a live performance, but I still have one major problem with the frames while using a YOLOv3 object detection model.

The frame gets really corrupted.
This only happens with a new model that I started using recently. The frames get badly corrupted, with frozen pixels and purple pixels at the bottom. The strange thing is that when I run only this model on my laptop's camera, it works extremely smoothly. The model inference itself runs in a thread, to prevent its long execution time from slowing down the main code. I really can't stop using this model, so I need help finding a solution.
Also note: even if I display a copy of the frame exactly as the drone first delivers it (no rectangles drawn on it, untouched — just copied as soon as I receive it and then displayed), the frame is still corrupted.
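One pattern worth checking is how the inference thread receives frames: if incoming frames are queued and the model can't keep up, the backlog stalls the decoder and the stream degrades. A minimal sketch of a worker that always processes only the most recent frame and drops stale ones (the class name and structure are my own illustration, not the poster's actual code; "model" stands in for any callable, not a real YOLOv3 pipeline):

```python
import threading

class LatestFrameWorker:
    """Run inference in a background thread on the newest frame only."""

    def __init__(self, model):
        self._model = model
        self._cond = threading.Condition()
        self._frame = None
        self._stop = False
        self.result = None  # latest inference result, read by the main thread
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self, frame):
        # Overwrite any unprocessed frame instead of queueing it:
        # a growing backlog, not the model itself, is what stalls the stream.
        with self._cond:
            self._frame = frame
            self._cond.notify()

    def _run(self):
        while True:
            with self._cond:
                while self._frame is None and not self._stop:
                    self._cond.wait()
                if self._stop:
                    return
                frame, self._frame = self._frame, None
            # Inference happens outside the lock so submit() never blocks on it.
            self.result = self._model(frame)

    def stop(self):
        with self._cond:
            self._stop = True
            self._cond.notify()
        self._thread.join()
```

The main loop keeps calling `submit(frame)` for every decoded frame and drawing the latest `worker.result`; the detector then lags by at most one inference, while decoding and display stay at full rate.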

Again, anything could help. Thank you so much for your time, and have a nice day, all!

I am attaching a very beautiful picture of me just so you can get an idea of what I mean by corrupted. Thanks again <3

In case it matters, this is my ffmpeg command:

CMD_FFMPEG = (f'ffmpeg -hwaccel auto -hwaccel_device opencl -i pipe:0 '
f'-pix_fmt bgr24 -s {FRAME_X}x{FRAME_Y} -f rawvideo pipe:1 ')
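That command makes ffmpeg emit raw bgr24 frames on stdout, so every frame is exactly FRAME_X * FRAME_Y * 3 bytes. One classic cause of exactly this kind of corruption is a short read from the pipe: `pipe.read(n)` may return fewer than `n` bytes, and if a short read is treated as a full frame, every subsequent frame is misaligned and shows up as frozen or purple pixel blocks. A minimal sketch of a fix, assuming the reading side looks something like this (`read_exact` and the sizes are my own names, not from the original code):

```python
FRAME_X, FRAME_Y = 640, 480          # placeholder values; use your own
FRAME_SIZE = FRAME_X * FRAME_Y * 3   # bgr24 = 3 bytes per pixel

def read_exact(pipe, n):
    """Read exactly n bytes from a file-like object; return None on EOF.

    A plain pipe.read(n) can legally return fewer bytes, which would
    shift every following frame and corrupt the whole stream.
    """
    buf = bytearray()
    while len(buf) < n:
        chunk = pipe.read(n - len(buf))
        if not chunk:
            return None
        buf.extend(chunk)
    return bytes(buf)
```

Usage would be `frame_bytes = read_exact(proc.stdout, FRAME_SIZE)` on the ffmpeg subprocess, then reshaping the bytes into a (FRAME_Y, FRAME_X, 3) array for display.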

EDIT:
I already did the presentation, but if anyone knows how to fix this, I would still love to know!
 

Attachments
  • לכידה.PNG (925.4 KB)
Hi, I have been using YOLOv4 in my software for the Tello drone. As far as I understand, the performance of the YOLO algorithms depends on the weights and config files you choose. For machines with low specs, the 'tiny' variants of YOLO are recommended.

I'm not sure what the problem is on your end, and it's really difficult to tell without going through your code. When I was building my software, I ran into a somewhat similar problem: my code was taking too long to post-process each frame from the Tello drone (adding object detection and some other AI features). The problem appears when the Tello video frame reader is not in a separate thread, so the main program blocks or delays fetching the next frame of the video.
The DJITelloPy library for Python implements video frame reading in a separate, non-blocking thread by default, so using it makes this easier.
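The pattern DJITelloPy uses internally can be sketched in a few lines: a daemon thread keeps pulling frames from the decoder so the newest frame is always available without blocking the main loop. This is an illustration of the pattern, not DJITelloPy's actual code; here "source" stands in for any object with a `read()` method returning the next frame:

```python
import threading

class BackgroundReader:
    """Keep the most recent frame from a source available without blocking."""

    def __init__(self, source):
        self.frame = None           # main thread just reads this attribute
        self._source = source
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            frame = self._source.read()
            if frame is None:       # source exhausted or stream closed
                break
            self.frame = frame      # always overwrite with the newest frame

    def stop(self):
        self._running = False
        self._thread.join()
```

With the real library you get the same behavior from `tello.get_frame_read().frame`; the main loop never waits on the decoder, it just picks up whatever frame is newest.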

frame gets really corrupted.
This only happens with a new model that I started using recently...
Judging from this statement, I can only guess that the problem you're facing might come from the same underlying issue that I faced.
If you've already found the solution, please do post here. I'm also curious what could be causing this issue on your end.

Hope this helps!
Thanks.

I've made software for the Tello drone using Python, JS and the DJITelloPy library. It enhances your drone's abilities by adding some of the most sought-after AI features, like Object Detection, Human Pose Estimation and Voice Commands. You can check it out at www.aidronesoftware.com.
You can download the software for free (limited time offer) using this LINK, after signing up.

Thank you.
 
@AIDrone
Which approach do you use for pose detection in your Drone AI software?

I made use of tf-pose-estimation, a human pose estimation algorithm implemented in TensorFlow. It also provides several variants with changes to the network structure for real-time processing on the CPU or on low-power embedded devices.
The tf-pose-estimation GitHub shows experiments with several models:
  • cmu: the VGG-based pretrained network described in the original paper, with weights in Caffe format converted for use in TensorFlow.
  • dsconv: same architecture as the cmu version, except using the depthwise separable convolutions of mobilenet.
  • mobilenet: based on the MobileNet V1 paper; 12 convolutional layers are used as feature-extraction layers.
  • mobilenet v2: similar to mobilenet, but using an improved version of it.
I made use of mobilenet v2.

The results are pretty good, as you can see in the video below, even though the drone is quite far away.
 
@AIDrone Thanks for your explanation.

The examples I found for tf-pose-estimation use TensorFlow 1.x.
I'd like to go with TF 2, but the examples I found here seem to require conda/anaconda, which I'd like to avoid. Do you have a hint for a TF2-based approach that doesn't use conda?

Perhaps we should move the discussion to another thread and not hijack this topic here.
 
