
Generating video from hex string/frame - video decoding


Apr 1, 2023
Hi all,

I wanted to make this post because I've been trying to implement live video streaming in Rust and have been running into a problem. The problem is not so much with Rust itself, but rather with the library I am trying to use: ffmpeg.
More specifically, I am able to process the packets coming in from the drone and convert them into a hex string, something like the one below:


However, what I'm currently trying to do is process the frame mentioned above (the full frame is not provided) into a video with the following command (keep in mind that the file ourVidFrame.txt contains the frame mentioned above):

ffmpeg -i ourVidFrame.txt -f image2pipe -pix_fmt rgb24 -vcodec rawvideo - ./outputs/vid.mp4

When I run the command above, what I get is an .mp4 file, but instead of being a video, it is a 3-second clip of the frame text above rendered on a black screen, as attached.

Not sure if anyone else has had this issue. I've also checked the other posts about how the video processing is done, especially for the Android app, but I couldn't find a way to repurpose that more than I already have.

This could be a simple misunderstanding I have with the ffmpeg command or a major gap in my knowledge. Any and all input would be much appreciated, thank you all!

Let me know if I can clarify anything further, or if there's any additional info I can provide. Thank you!


  • Screenshot 2023-04-10 at 9.41.31 PM.png
If you specify a *.txt file as input, I guess that ffmpeg interprets the file extension and assumes that you want to convert text into images.

You can try to pipe the frames into ffmpeg like described here: FFmpeg FAQ

But even this will not work when the data is hex-encoded. You would have to decode the hex string into binary data before piping it into ffmpeg.
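The decoding step described above can be sketched in plain Rust with only the standard library. This is a minimal illustration, not part of anyone's actual code in this thread; the function name and the sample hex string are made up, and the input is assumed to be plain hex digits with no separators:

```rust
// Decode a hex string (as logged from the Tello packets) back into raw bytes.
// Assumes an even-length string of plain hex digits; errors are returned as
// strings to keep the sketch short.
fn hex_to_bytes(hex: &str) -> Result<Vec<u8>, String> {
    if hex.len() % 2 != 0 {
        return Err("hex string has odd length".to_string());
    }
    (0..hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect()
}

fn main() {
    // "00 00 00 01" is the H.264 Annex-B start code that precedes each NAL unit.
    let bytes = hex_to_bytes("00000001674d40").unwrap();
    println!("{:?}", bytes); // [0, 0, 0, 1, 103, 77, 64]
}
```

The resulting `Vec<u8>` is what would be written to ffmpeg's stdin, rather than the hex text itself.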

This raises the question of why you encode the video stream frames into hex strings beforehand.
If you specify a *.txt file as input, I guess that ffmpeg interprets the file extension and assumes that you want to convert text into images.
That's interesting. If the .txt file is being interpreted as text to render into images, then I'll need to look into what other input formats ffmpeg accepts.

You can try to pipe the frames into ffmpeg like described here: FFmpeg FAQ
I have gone through this site, but it raises another question: how can I generate even a single image from the hex I have?

But even this will not work when the data is hex-encoded. You would have to decode the hex string into binary data before piping it into ffmpeg.

This raises the question of why you encode the video stream frames into hex strings beforehand.
The only reason I am converting the received packets (printed in decimal) to hex is that I wanted to confirm whether the particular packets I was treating as the SPS, PPS, and I-frame were indeed those. I got the information regarding these hex strings from this GitHub post:
dji-ryze-tello/README.md at master · m6c7l/dji-ryze-tello

Hope that answers the question regarding the hex strings; if I had known the data needed to be converted back to binary, I would definitely have done so.

All that being said, would it make sense to add an intermediary step of creating an image from the frame information instead of trying to convert it into a video directly? And if so, how can I achieve that via ffmpeg?

Thanks for taking time to reply Hacky, really appreciate it!
Perhaps it would be easier to help if we knew more about what you ultimately want to achieve and why you think this should be done in Rust.

If you were to use Python, it would be pretty easy to capture the video stream via OpenCV, as shown in this example: DJITelloPy/record-video.py at master · damiafuentes/DJITelloPy

In order to read and save the stream directly with ffmpeg, after the "streamon" command has been sent to the Tello, you can use for example:
ffmpeg -i udp:// -vcodec libx264 output.mp4
So basically anything you can do with ffmpeg can be done with the video stream from the Tello. OpenCV also uses ffmpeg for this purpose under the hood.
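On the Rust side, receiving the raw UDP datagrams is straightforward with the standard library. Since the command above leaves the drone's port out, the sketch below round-trips a single datagram over loopback purely to show the shape of the receive loop; the function name and payload bytes are illustrative, not from the thread:

```rust
use std::net::UdpSocket;

// Loopback demo standing in for the drone's video port: bind a receiver,
// send it one fake "video packet", and read it back.
fn receive_one_packet() -> std::io::Result<usize> {
    let receiver = UdpSocket::bind("127.0.0.1:0")?; // OS picks a free port
    let addr = receiver.local_addr()?;

    // Stand-in for the drone: send one datagram to the receiver.
    let sender = UdpSocket::bind("127.0.0.1:0")?;
    sender.send_to(&[0, 0, 0, 1, 0x67], addr)?;

    // In a real client this would loop until "streamoff", forwarding each
    // datagram to ffmpeg's stdin instead of just returning its length.
    let mut buf = [0u8; 2048];
    let (len, _src) = receiver.recv_from(&mut buf)?;
    Ok(len)
}

fn main() {
    println!("received {} bytes", receive_one_packet().unwrap());
}
```

In a real client, the receiver would be bound to the Tello's video port and each datagram forwarded onward rather than inspected once.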
Hi Hacky, apologies for the late reply.

The final objective for this project of mine is to get a live video stream working via Rust. I completely understand that the implementation would be easier in Python, but implementing it in Rust brings its own set of challenges, which is exactly why I chose Rust.

I tried using the command you mentioned above; I had seen it before in one of the earlier threads (I believe related to the Android video streaming):
ffmpeg -i udp:// -vcodec libx264 output.mp4

When I tried the command above, this is a sample of what I got in the CLI:

[libx264 @ 0x7fb25e00b740] frame I:9 Avg QP:19.59 size: 15537
[libx264 @ 0x7fb25e00b740] frame P:463 Avg QP:22.26 size: 7647
[libx264 @ 0x7fb25e00b740] frame B:329 Avg QP:25.76 size: 2157
[libx264 @ 0x7fb25e00b740] consecutive B-frames: 26.0% 52.2% 16.9% 5.0%
[libx264 @ 0x7fb25e00b740] mb I I16..4: 28.0% 64.6% 7.4%
[libx264 @ 0x7fb25e00b740] mb P I16..4: 7.4% 9.9% 0.5% P16..4: 36.5% 6.0% 3.8% 0.0% 0.0% skip:35.8%
[libx264 @ 0x7fb25e00b740] mb B I16..4: 0.2% 0.1% 0.0% B16..8: 46.5% 1.2% 0.2% direct: 0.6% skip:51.1% L0:48.1% L1:48.5% BI: 3.4%
[libx264 @ 0x7fb25e00b740] 8x8 transform intra:56.1% inter:72.1%
[libx264 @ 0x7fb25e00b740] coded y,uvDC,uvAC intra: 15.9% 22.9% 5.6% inter: 10.3% 15.0% 0.8%
[libx264 @ 0x7fb25e00b740] i16 v,h,dc,p: 85% 5% 4% 6%
[libx264 @ 0x7fb25e00b740] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 77% 5% 11% 1% 1% 1% 1% 1% 1%
[libx264 @ 0x7fb25e00b740] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 18% 16% 4% 5% 6% 4% 5% 4%
[libx264 @ 0x7fb25e00b740] i8c dc,h,v,p: 32% 5% 61% 2%
[libx264 @ 0x7fb25e00b740] Weighted P-Frames: Y:0.6% UV:0.2%
[libx264 @ 0x7fb25e00b740] ref P L0: 88.5% 3.5% 6.3% 1.8% 0.0%
[libx264 @ 0x7fb25e00b740] ref B L0: 95.0% 4.6% 0.4%
[libx264 @ 0x7fb25e00b740] ref B L1: 99.7% 0.3%
[libx264 @ 0x7fb25e00b740] kb/s:1096.16

I don't believe we can simply forward all the information from that port to ffmpeg, because the video frames are not sent as a single unit; instead they arrive as separate SPS, PPS, and keyframe packets.

So I believe the better option might be to stitch the frames together and send them as input to ffmpeg, which raises the question of what format the frames should be in before being passed to ffmpeg.
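One common format for feeding raw H.264 to ffmpeg is the Annex-B byte stream, where each NAL unit (SPS, PPS, keyframe) is prefixed with a 00 00 00 01 start code. A minimal Rust sketch of that stitching step, under the assumption that each received unit is a bare NAL payload without a start code (the sample bytes below are made up, not real Tello data):

```rust
// Concatenate NAL units into an Annex-B byte stream: each unit is prefixed
// with the 4-byte start code 00 00 00 01.
fn to_annex_b(nal_units: &[&[u8]]) -> Vec<u8> {
    let mut stream = Vec::new();
    for nal in nal_units {
        stream.extend_from_slice(&[0, 0, 0, 1]); // Annex-B start code
        stream.extend_from_slice(nal);
    }
    stream
}

fn main() {
    // Hypothetical payloads; real SPS/PPS/keyframe bytes come from the drone.
    let sps: &[u8] = &[0x67, 0x4d, 0x40, 0x1f];
    let pps: &[u8] = &[0x68, 0xee, 0x38, 0x80];
    let idr: &[u8] = &[0x65, 0x88, 0x84, 0x00];
    let stream = to_annex_b(&[sps, pps, idr]);
    println!("{} bytes", stream.len()); // 24 bytes: 3 start codes + 3 payloads
}
```

The resulting bytes could then be written to ffmpeg's stdin with something like `ffmpeg -f h264 -i - -c copy out.mp4`, which uses ffmpeg's raw H.264 demuxer instead of guessing a format from a file extension.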
