
Generating video from hex string/frame - video decoding

SirHappi

Hi all,

I wanted to make this post because I've been trying to implement video live-streaming in Rust and have been running into some problems. The problem is not so much with Rust itself, but rather with the library I'm trying to use: ffmpeg.
More specifically, I'm able to process the packets coming in from the drone and convert them into a hex string, something like the one below (a simplified sketch of my receiving code follows the dump):

Code:
0000000141f2823bdc674f4164415b7553235f50b2c2acb813221d119b7ad8a9183d3ceac8c63c57af7249d35edbec93e72cebc35c37b6c4b6234248459990d2ac12058be5ea714d6e8aa735e0b45ad9c84a0aed3e1b01f9c146bdfcefa0fd8fa474321e7c3e43c626e8c606b3b43bc29c5ee48f3a4c3d8a0855ef4bb51788e23257fce0b6e15b5cbe10a7ae1ba8436306687cd16d3acc3802c2dffdf744c677434ff69ed1b894b1dc390d63671a8f72e97135fd102152c67ebb41c79389d1fbcbfc3e38898b5ab0cb4f82456e63e0ee0001b52b8c4c58fb1412437f90ef92f1e9dd604be836bb8fd6a25d6e88ebaeee9c47ef3f9a22d8aa93bafe5d45b5977aa975fb767f275f030036d31dba55728c111eaea7fa80cc3a9596a215e62b7bf499bae38ae89bc11315883b3b0e368ce040dca7406632859b787770bfbc9362e0542db4ac33079aa4ad7a321f6712d4e5e3968299c28f7409baf51847404c6c3eb35f360688b9cad76298f331d17f4bec87f6d54d1b85bf4679a23c5e..............
dfacc68439063228805bef622a4c46d223bde4baaf8c113190c9bb4c2d8cc36df6311801685a50121fa6227d6e227c18193731b84db8b5055debb5bfec100115df8695505b3aef6e60ab5b82392212332f5fa6f31fea90826c656609aa74461f07739528f2e78d7db4a8e9fa04d32d5711606e3b620cf1bbbf6ca46a5d44ce6377acb9cc4d4ccc18b6b553307639f12eeb81a8e3f7080d2618aff548dd00564780064bef4cc1e76f3c37526fb2054bc514885c16334aa6fbdd8cdeae492db1ebfd5eafe6144f5828df1074f1fbdef5b2cdb072be29965d121fb1d823a46c9ea59f1ef95dcf8f3d70a88f030ba5c15e3410ab2cd9907e3602048cee7982baebfb673416865273c198a04c3d5c3e5028573c0376a79098b5f043fa76f4416f1040ddddb57017197a3b23ecca7dfd15ebe9334b2e987f13af0a5b20f082eff50ecf663dde1d131eae14ff9aa79e5feb69d0491bf85d55a6f7d9a8fbf05f3f5640f28bad86b41fd4a6745e46e99f59b8e7d9ed544c0eb8bf36bf5c6b5c3d4ff59e5663b6873acbfe99d206aa47117fd861f15ae0a7adfccbd4bcf56614262b787853643e04c9b153689f8b82b93099be66368c2e93036fa2e1b2e53201f50fdd9bb48386f7ca82a4291ee720f2823e4aa6c5a6c30d1d580cf85412b318ed28cbcb94587a21353de28ca5dc7346f4babe9af81813c67e9c278fdb956c6106d982242688eab698b654a4e027fce7158883fa973d0aef18e001962b96664b5c51819a2dfabf806892094ce09d62dabd44517af667d0f20c3bb8ae9dce870b1b054a5291e18f7fd3e39db2054655706237269a14f3bb9e0009db7296c5146bd620f27218e372c8c18bde517d2618832e7b235eb8b75c4c9e06ce46452de7ee6807a3c5f92adc0b76b744d3884be9e8a209a8f668cec71b0d80461e5fb588876ecd55e5d55cab64fcc533bbe89d3d398145f6a10fbfdcc797b74d1bca1a48b052f2a38c37673ff7b78fbf8ed1a7b721457a80
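For reference, the receiving side of my code is roughly along these lines (a heavily simplified sketch; the port is the standard Tello video port, everything else is trimmed down):

Code:
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // Tello pushes the video stream to UDP port 11111 after "streamon"
    let socket = UdpSocket::bind("0.0.0.0:11111")?;
    let mut buf = [0u8; 2048];
    loop {
        let (len, _src) = socket.recv_from(&mut buf)?;
        // Turn the raw packet bytes into a hex string like the one above
        let hex: String = buf[..len].iter().map(|b| format!("{:02x}", b)).collect();
        println!("{}", hex);
    }
}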

However, what I'm currently doing is trying to process the frame above (the full frame is not provided) into a video with the following command (keep in mind that the file ourVidFrame.txt contains the frame above):

Code:
ffmpeg -i ourVidFrame.txt -f image2pipe -pix_fmt rgb24 -vcodec rawvideo - ./outputs/vid.mp4

When I run the command above, what I get is an .mp4, but instead of being a video of the stream, it is a 3-second clip of the hex string above being rendered as text on a black screen, as attached.

Not sure if anyone else has had this issue. I've checked the other posts about how the video processing works, especially for the Android app, but I couldn't find a way to repurpose that more than I already have.

This could be a simple misunderstanding on my part of the ffmpeg command, or a major gap in my knowledge. Any and all input would be much appreciated!

Let me know if I can clarify anything more, or if there's any additional info I can provide. Thank you!
 

Attachments

  • Screenshot 2023-04-10 at 9.41.31 PM.png (250.2 KB)
If you specify a *.txt file as input, I guess that ffmpeg interprets the file extension and assumes that you want to convert text into images.

You can try to pipe the frames into ffmpeg as described here: FFmpeg FAQ

But even this will not work when the data is hex-encoded. You would have to decode the hex string into binary data before piping it into ffmpeg.
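Since you are working in Rust, the decode-and-pipe step could look roughly like this (an untested sketch; it assumes the decoded bytes form a raw Annex B H.264 stream, and the file name is just taken from your command):

Code:
use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Read the hex string and decode it back into raw binary data
    let hex = std::fs::read_to_string("ourVidFrame.txt")?;
    let hex = hex.trim();
    let bytes: Vec<u8> = (0..hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).expect("invalid hex"))
        .collect();

    // Pipe the binary data into ffmpeg instead of passing the .txt file
    let mut ffmpeg = Command::new("ffmpeg")
        .args(["-f", "h264", "-i", "pipe:0", "-c:v", "copy", "vid.mp4"])
        .stdin(Stdio::piped())
        .spawn()?;
    ffmpeg.stdin.as_mut().unwrap().write_all(&bytes)?;
    ffmpeg.wait()?;
    Ok(())
}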

This raises the question of why you encode the video stream frames into hex strings in the first place.
 
Hacky said:
If you specify a *.txt file as input, I guess that ffmpeg interprets the file extension and assumes that you want to convert text into images.
That's interesting. If the .txt file is being interpreted as us wanting to convert text into images, then I'll need to look into what other input formats ffmpeg accepts.

Hacky said:
You can try to pipe the frames into ffmpeg as described here: FFmpeg FAQ
I have gone through that site, but it raises yet another question: how can I generate even a single image from the hex I have?

Hacky said:
But even this will not work when the data is hex-encoded. You would have to decode the hex string into binary data before piping it into ffmpeg.

This raises the question of why you encode the video stream frames into hex strings in the first place.
The only reason I'm converting the received packets (in decimal) to hex is that I wanted to confirm whether the packets I was treating as the SPS, PPS, and I-frame were indeed those, and I got the information regarding these hex strings from this GitHub post:
dji-ryze-tello/README.md at master · m6c7l/dji-ryze-tello

Hope that answers the question regarding the hex strings; if I had realized we have to convert back into binary, I would've definitely done so.

All that being said, would it make sense for me to do an intermediary step of creating an image from the frame information instead of trying to convert it into a video directly? And that also begs the question of how I can achieve that via ffmpeg.

Thanks for taking the time to reply, Hacky, really appreciate it!
 
Perhaps it would be easier to help if we knew more about what you ultimately want to achieve and why you think this should be done in Rust.

If you used Python, it would be pretty easy to capture the video stream via OpenCV, as shown in this example: DJITelloPy/record-video.py at master · damiafuentes/DJITelloPy

In order to read and save the stream directly with ffmpeg, after the "streamon" command has been sent to the Tello, you can use for example:

Code:
ffmpeg -i udp://0.0.0.0:11111 -vcodec libx264 output.mp4
So basically anything you can do with ffmpeg can be done with the video stream from the Tello. OpenCV also uses ffmpeg for that purpose in the backend.
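If you want to stay in Rust, you could also send the SDK commands yourself and then let ffmpeg do the rest, roughly like this (untested sketch; the ports and drone address are the documented Tello SDK defaults):

Code:
use std::net::UdpSocket;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Put the Tello into SDK mode and switch the video stream on
    let cmd = UdpSocket::bind("0.0.0.0:8889")?;
    cmd.send_to(b"command", "192.168.10.1:8889")?;
    cmd.send_to(b"streamon", "192.168.10.1:8889")?;

    // Let ffmpeg read and encode the stream arriving on UDP port 11111
    Command::new("ffmpeg")
        .args(["-i", "udp://0.0.0.0:11111", "-vcodec", "libx264", "output.mp4"])
        .status()?;
    Ok(())
}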
 
Hi Hacky, apologies for the late reply.

The final objective of this project of mine is to get a live video stream working via Rust. I completely understand that the implementation would be easier in Python, but implementing it in Rust has its own set of challenges, and that is exactly why I chose Rust.

I tried the command you mentioned above; I've seen it before in one of the threads mentioned earlier (I believe related to the Android video streaming):
Code:
 ffmpeg -i udp://0.0.0.0:11111 -vcodec libx264 output.mp4

When I tried the command above, this is a sample of what I got in the CLI:

Code:
[libx264 @ 0x7fb25e00b740] frame I:9 Avg QP:19.59 size: 15537
[libx264 @ 0x7fb25e00b740] frame P:463 Avg QP:22.26 size: 7647
[libx264 @ 0x7fb25e00b740] frame B:329 Avg QP:25.76 size: 2157
[libx264 @ 0x7fb25e00b740] consecutive B-frames: 26.0% 52.2% 16.9% 5.0%
[libx264 @ 0x7fb25e00b740] mb I I16..4: 28.0% 64.6% 7.4%
[libx264 @ 0x7fb25e00b740] mb P I16..4: 7.4% 9.9% 0.5% P16..4: 36.5% 6.0% 3.8% 0.0% 0.0% skip:35.8%
[libx264 @ 0x7fb25e00b740] mb B I16..4: 0.2% 0.1% 0.0% B16..8: 46.5% 1.2% 0.2% direct: 0.6% skip:51.1% L0:48.1% L1:48.5% BI: 3.4%
[libx264 @ 0x7fb25e00b740] 8x8 transform intra:56.1% inter:72.1%
[libx264 @ 0x7fb25e00b740] coded y,uvDC,uvAC intra: 15.9% 22.9% 5.6% inter: 10.3% 15.0% 0.8%
[libx264 @ 0x7fb25e00b740] i16 v,h,dc,p: 85% 5% 4% 6%
[libx264 @ 0x7fb25e00b740] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 77% 5% 11% 1% 1% 1% 1% 1% 1%
[libx264 @ 0x7fb25e00b740] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 18% 16% 4% 5% 6% 4% 5% 4%
[libx264 @ 0x7fb25e00b740] i8c dc,h,v,p: 32% 5% 61% 2%
[libx264 @ 0x7fb25e00b740] Weighted P-Frames: Y:0.6% UV:0.2%
[libx264 @ 0x7fb25e00b740] ref P L0: 88.5% 3.5% 6.3% 1.8% 0.0%
[libx264 @ 0x7fb25e00b740] ref B L0: 95.0% 4.6% 0.4%
[libx264 @ 0x7fb25e00b740] ref B L1: 99.7% 0.3%
[libx264 @ 0x7fb25e00b740] kb/s:1096.16

I don't believe we can simply forward all the information from that port to ffmpeg, because the video frames are not sent as a single unit; instead they are sent as the SPS, PPS, and the keyframe separately.

So I believe the better option might be to stitch the frames together and send the result as input to ffmpeg, which raises the question of what format the frames should be in before being passed to ffmpeg.
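For concreteness, what I have in mind is roughly the sketch below (assuming the SPS, PPS, and keyframe payloads have already been hex-decoded and don't carry their own start codes), whose output could then be piped to ffmpeg with -f h264 as discussed above:

Code:
// Glue decoded NAL units into an Annex B byte stream for ffmpeg.
fn annex_b(sps: &[u8], pps: &[u8], idr: &[u8]) -> Vec<u8> {
    const START_CODE: [u8; 4] = [0x00, 0x00, 0x00, 0x01];
    let mut stream = Vec::new();
    for nal in [sps, pps, idr] {
        stream.extend_from_slice(&START_CODE);
        stream.extend_from_slice(nal);
    }
    stream
}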
 
