How to convert H264 RTP stream from PCAP to a playable video file

yoosha · May 14, 2013 · Viewed 8.7k times

I have captured an H264 stream in PCAP files and am trying to create media files from the data. The container is not important (avi, mp4, mkv, …).
When I use videosnarf or rtpbreak (combined with Python code that adds 00 00 00 01 before each packet) and then ffmpeg, the result is OK only if the input frame rate is constant (or nearly constant). However, when the input is variable frame rate, the result plays too fast (and in some rare cases too slow).
For example:

videosnarf -i captured.pcap -c
ffmpeg -i H264-media-1.264 output.avi
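
For reference, the "adds 00 00 00 01 before each packet" step mentioned above is roughly the following; this is a made-up minimal sketch, not the actual helper, and with packetization-mode=1 any fragmented FU-A packets would first need reassembly into whole NAL units:

# Sketch only: glue depacketized NAL units into an Annex-B elementary
# stream so ffmpeg can parse the raw H264. Names are illustrative.
START_CODE = b"\x00\x00\x00\x01"

def write_annexb(nal_units, out_path):
    """nal_units: iterable of raw NAL unit byte strings."""
    with open(out_path, "wb") as out:
        for nal in nal_units:
            out.write(START_CODE)  # Annex-B start code before every NAL unit
            out.write(nal)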

After doing some investigation of the issue, I now believe that since videosnarf (and rtpbreak) remove the RTP header from the packets, the timestamps are lost and ffmpeg treats the input data as constant frame rate.

  1. I would like to know if there is a way to pass the timestamp vector (in a separate file?) or any other timing information to ffmpeg so that the result is created correctly?
  2. Is there any other way I can take the data out of the PCAP file and play it, or convert it and then play it?
  3. Since all work is done in Python, any suggestion of libraries/modules that can help with this (even if it requires some coding) is welcome as well; a sketch of pulling the RTP timestamps out of the capture follows this list.
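
As a starting point for that, here is a minimal, untested sketch of reading the RTP timestamps with dpkt and struct. It assumes Ethernet + IPv4 + UDP framing, i.e. the RTP header starts 42 bytes into each captured frame (the same offset the script further below uses); the 32-bit RTP timestamp sits at byte offset 4 of the RTP header and runs at 90 kHz for H264, and the marker bit normally flags the last packet of a frame.

import struct
import dpkt

def rtp_frame_times(pcap_path):
    """Yield (capture_time, rtp_timestamp, marker) for each RTP packet.

    Assumes Ethernet + IPv4 + UDP framing, so the RTP header starts at
    byte 42 of every captured frame.
    """
    with open(pcap_path, "rb") as f:
        for cap_ts, buf in dpkt.pcap.Reader(f):
            rtp = buf[42:]
            if len(rtp) < 12:                   # not even a full RTP header
                continue
            marker = bool(ord(rtp[1]) & 0x80)   # usually set on the last packet of a frame
            rtp_ts = struct.unpack("!I", rtp[4:8])[0]  # 90 kHz units for H264
            yield cap_ts, rtp_ts, marker

Grouping packets on the marker bit and computing (rtp_ts - first_rtp_ts) / 90000.0 gives a presentation time per frame, which could then, for example, be written out as a timecode file for a muxer such as mkvmerge alongside the raw H264.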

Note: All work is done offline, so there are no limitations on the output. It can be CBR/VBR, any playable container, and transcoding is fine. The only "limitation" I have: it should all run on Linux…

Thanks Y

Some additional information:
Since nothing provides ffmpeg with the timestamp data, I decided to try a different approach: skip videosnarf and use Python code to pipe the packets directly to ffmpeg (using the "-f rtp -i -" options), but then it refuses to accept them unless I provide an SDP file...
How do I provide the SDP file? Is it an additional input file? ("-i config.sdp")

The following code is an unsuccessful attempt at doing the above:

import time
import sys
import shutil
import subprocess
import os
import dpkt

if len(sys.argv) < 2:
    print "argument required!"
    print "txpcap <pcap file>"
    sys.exit(2)
pcap_full_path = sys.argv[1]

# Read the stream description from the SDP file, then take RTP packets on stdin.
# (Note: ffmpeg's option for dropping audio is '-an'; '-na' is not a recognized option.)
ffmp_cmd = ['ffmpeg', '-loglevel', 'debug', '-y', '-i', '109c.sdp', '-f', 'rtp', '-i', '-', '-na', '-vcodec', 'copy', 'p.mp4']

# stderr is not captured here, so ffmpeg's log goes straight to the terminal
# and 'err' below ends up as None.
ffmpeg_proc = subprocess.Popen(ffmp_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)

with open(pcap_full_path, "rb") as pcap_file:
    pcapReader = dpkt.pcap.Reader(pcap_file)
    for ts, data in pcapReader:
        if len(data) < 49:
            # too short to carry an RTP payload
            continue
        # strip Ethernet (14) + IPv4 (20) + UDP (8) headers; what remains is
        # the RTP header plus payload
        ffmpeg_proc.stdin.write(data[42:])

sout, err = ffmpeg_proc.communicate()
print "stdout ---------------------------------------"
print sout
print "stderr ---------------------------------------"
print err

In general this will pipe the packets from the PCAP file to the following command:

ffmpeg -loglevel debug -y -i 109c.sdp -f rtp -i - -na -vcodec copy p.mp4

SDP file [the RTP stream uses dynamic payload type 109, H264]:

v=0
o=- 0 0 IN IP4 ::1
s=No Name
c=IN IP4 ::1
t=0 0
a=tool:libavformat 53.32.100
m=video 0 RTP/AVP 109
a=rtpmap:109 H264/90000
a=fmtp:109 packetization-mode=1;profile-level-id=64000c;sprop-parameter-sets=Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw=;
b=AS:200
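
As a side note, the two comma-separated base64 blobs in sprop-parameter-sets above are the H264 SPS and PPS. If you end up working with the raw elementary stream rather than the SDP route, decoding them and prepending them with start codes is one way to hand the decoder its codec parameters up front; a small sketch, nothing more:

import base64

# The sprop-parameter-sets value from the SDP above: "SPS,PPS" in base64.
SPROP = "Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw="

def sprop_to_annexb(sprop_value):
    """Return the SPS and PPS as Annex-B NAL units (start code + NAL)."""
    out = b""
    for blob in sprop_value.split(","):
        out += b"\x00\x00\x00\x01" + base64.b64decode(blob)
    return out

# e.g. prepend sprop_to_annexb(SPROP) to the raw H264-media-1.264 stream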

Results:

ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
  built on Mar 20 2012 04:34:50 with gcc 4.4.6 20110731 (Red Hat 4.4.6-3)
  configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-frei0r --enable-libopencv --enable-libdc1394 --enable-libdirac --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping
  libavutil      51. 35.100 / 51. 35.100
  libavcodec     53. 61.100 / 53. 61.100
  libavformat    53. 32.100 / 53. 32.100
  libavdevice    53.  4.100 / 53.  4.100
  libavfilter     2. 61.100 /  2. 61.100
  libswscale      2.  1.100 /  2.  1.100
  libswresample   0.  6.100 /  0.  6.100
  libpostproc    52.  0.100 / 52.  0.100
[sdp @ 0x15c0c00] Format sdp probed with size=2048 and score=50
[sdp @ 0x15c0c00] video codec set to: h264
[NULL @ 0x15c7240] RTP Packetization Mode: 1
[NULL @ 0x15c7240] RTP Profile IDC: 64 Profile IOP: 0 Level: c
[NULL @ 0x15c7240] Extradata set to 0x15c78e0 (size: 36)!
[h264 @ 0x15c7240] err{or,}_recognition separate: 1; 1
[h264 @ 0x15c7240] err{or,}_recognition combined: 1; 10001
[sdp @ 0x15c0c00] decoding for stream 0 failed
[sdp @ 0x15c0c00] Could not find codec parameters (Video: h264)
[sdp @ 0x15c0c00] Estimating duration from bitrate, this may be inaccurate
109c.sdp: could not find codec parameters
Traceback (most recent call last):
  File "./ffpipe.py", line 26, in <module>
    ffmpeg_proc.stdin.write(data[42:])
IOError: [Errno 32] Broken pipe

(Forgive the mess above; the editor keeps complaining about code that is not indented correctly.)

I've been working on this issue for days... any help/suggestion/hint will be appreciated.

Answer

Luke Wahlmeier · Aug 21, 2014

I am pretty sure the only way to do this (sanely) would be to replay the RTP stream, using the network time between the packets as the delay.

The problem is the variable frame rate: since there is no container around the H264 to tell it how much time passed between one frame and the next, it has no idea how to time anything.

If the H264 stream were a constant frame rate, you might be able to push the RTP data to ffmpeg without the timings and just set the input fps, but I don't know of any H264 RTP streams that work like that. What you will most likely see is the video stream playing way too fast in some parts and too slow in others.
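
For what it's worth, here is a rough, untested sketch of that replay idea: resend each captured RTP packet over UDP to the loopback address, sleeping for the capture-time gap between packets, while a separately started ffmpeg reads the SDP (with the chosen port on its m=video line instead of 0) and does something like ffmpeg -i 109c.sdp -vcodec copy out.mp4. The port 5004 below is an arbitrary example, and the 42-byte offset assumes the same Ethernet + IPv4 + UDP framing as the question's script.

import socket
import time
import dpkt

def replay_rtp(pcap_path, dst=("127.0.0.1", 5004)):
    """Resend the captured RTP packets over UDP, preserving the original
    inter-packet gaps (the PCAP capture timestamps) as sleeps.

    The destination port must match the port the receiving ffmpeg is
    told about in the SDP file.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    prev = None
    with open(pcap_path, "rb") as f:
        for cap_ts, buf in dpkt.pcap.Reader(f):
            if len(buf) < 54:        # Ethernet + IP + UDP + RTP header minimum
                continue
            if prev is not None:
                time.sleep(max(0.0, cap_ts - prev))
            prev = cap_ts
            sock.sendto(buf[42:], dst)   # RTP header + payload, link/IP/UDP stripped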