
A few seconds of latency with cv2.VideoCapture

YOLO
Member · Joined: Sep 19, 2023 · Messages: 19 · Reaction score: 2 · Location: United Kingdom
Hello,

I know this is an old topic that was discussed a long time ago, but after a few hours of searching I had no luck finding the right discussion.

I am trying to capture the video stream with cv2.VideoCapture instead of using djitellopy. However, there is latency in the video even though I set the buffer size to 1. The funny thing I found is that for the first ~50 frames, capture.read() returned within 0.002 seconds, but after that it took 0.035 seconds or more. Could you please give me some hints?

Part of the code:

# This fragment assumes cv2, socket and time are imported earlier, that `socket`
# is a UDP socket already set up for Tello commands, and that tello_address is
# the Tello's command address (by default 192.168.10.1, port 8889).
socket.sendto('command'.encode('utf-8'), tello_address)
socket.sendto('streamon'.encode('utf-8'), tello_address)
print("Start streaming")

capture = cv2.VideoCapture('udp://0.0.0.0:11111', cv2.CAP_FFMPEG)
capture.set(cv2.CAP_PROP_BUFFERSIZE, 1)
if not capture.isOpened():
    capture.open('udp://0.0.0.0:11111')

while True:
    start = time.time()
    ret, frame = capture.read()
    done = time.time() - start
    print(done)

    if ret:
        cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

Thank you very much!

Regards,
YOLO
 
Hacky

As you only show us "part of the code": what else is done with the captured frames?

If that processing needs more time than the interval given by the frame rate, the captured frames will be held in a growing buffer and latency will continuously increase. You need to separate the capturing into its own thread and push the frames into a queue. From this queue you only process the latest frame and throw away the rest. You'll find code examples for this on the internet.
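For illustration, here is a minimal sketch of that pattern, assuming the streamon command has already been sent and the Tello video is arriving on udp://0.0.0.0:11111; a queue of size one plus drop-the-oldest logic keeps only the newest frame for the main loop.

import queue
import threading

import cv2


class LatestFrameReader:
    """Read frames in a background thread and keep only the newest one."""

    def __init__(self, url):
        self.capture = cv2.VideoCapture(url, cv2.CAP_FFMPEG)
        self.frames = queue.Queue(maxsize=1)
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.running:
            ret, frame = self.capture.read()
            if not ret:
                continue
            # Drop the stale frame (if any) so the queue always holds the latest one.
            if self.frames.full():
                try:
                    self.frames.get_nowait()
                except queue.Empty:
                    pass
            self.frames.put(frame)

    def read(self):
        # Blocks until a frame is available, then returns the newest one.
        return self.frames.get()

    def stop(self):
        self.running = False
        self.capture.release()


# Assumes 'streamon' was already sent to the Tello over the command socket.
reader = LatestFrameReader('udp://0.0.0.0:11111')
while True:
    frame = reader.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
reader.stop()

Because capture.read() keeps running in the background thread, the internal FFmpeg buffer never gets a chance to grow, and the main loop always works on a frame that is at most one frame interval old, no matter how slow the per-frame processing is.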
 
YOLO

Hi Hacky,

Yes, I knew that the frames were accumulating, especially at the beginning right after the streamon command was sent. I tried to handle this with cv2.VideoCapture but had no luck... But with PyAV and 'frame flushing' it is working great. When I applied the same method to cv2.VideoCapture, I didn't see any improvement, which is why I want to dig into the details to understand more and find a solution.

What I am trying to achieve is to create a program without leveraging djitellopy.

Part of my PyAV code:

def tello_video(drone, tello_video_url, handresult):
    start_time = time.time()
    tell_tello(drone, 'streamon', tello_status[drone])
    vs_buffer_size = 4096
    container = av.open(tello_video_url, options={'buffer_size': str(vs_buffer_size)})
    # Estimate how many ~30 fps frames arrived while the stream was starting up,
    # so they can be skipped (flushed) instead of processed as stale frames.
    flush_frame = int((time.time() - start_time) / 0.033) - 1

    loop = True
    while loop:
        for frame in container.decode(video=0):
            if flush_frame > 0:
                flush_frame -= 1
                continue

            image = cv2.flip(np.array(frame.to_image()), 1)
            start_time = time.time()
            result = hands[drone].process(image)
            frame_height = image.shape[0]
            frame_width = image.shape[1]
            my_hand = []
            .
            .
            MY IMAGE PROCESSING CODES...
            .
            .
            cv2.imshow(f'Tello Video Stream {drone}', image)
            cv2.moveWindow(f'Tello Video Stream {drone}', drone * 900 + 50, 50)
            cv2.waitKey(1)
            process_time = time.time() - start_time
            # Skip the frames that arrived while this one was being processed.
            flush_frame = int(process_time / 0.033)

            if handresult.landed:
                container.close()
                loop = False
                break
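For comparison with the cv2.VideoCapture route from the first post, the same flush-by-skipping idea might look roughly like the sketch below. This is an assumption-based sketch rather than tested code: it presumes the Tello stream runs at about 30 fps and uses grab(), which pulls a frame off the buffer without decoding it, to discard the frames that piled up during processing.

import time

import cv2

capture = cv2.VideoCapture('udp://0.0.0.0:11111', cv2.CAP_FFMPEG)
FRAME_INTERVAL = 1.0 / 30.0  # approximate Tello frame period (~30 fps)

while True:
    ret, frame = capture.read()
    if not ret:
        continue

    start = time.time()
    # ... image processing on `frame` goes here ...
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    process_time = time.time() - start

    # Estimate how many frames arrived while processing and discard them,
    # so the next read() returns a recent frame instead of a stale one.
    for _ in range(int(process_time / FRAME_INTERVAL)):
        capture.grab()

capture.release()

Unlike the threaded reader sketched earlier, the skipping here happens in the same loop, so it only helps if the estimate of how many frames piled up during processing is roughly right; the background-thread approach avoids that guesswork entirely.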
 
