So I am trying to set up adaptive streaming (MPEG-DASH) and I think I have 80-90% of the problem solved. Right now I ingest my source video, transcode it into 5 resolutions, each at 3 bitrates (low, medium, high), and then split all of those renditions into 5-second chunks so the player can always pick the best quality for the viewer's connection. Now I am at the point where I need to generate my MPD manifest file, and I am having a hard time figuring out how. Almost every answer I have found says to use MP4Box, but its license is too restrictive for my project.
Is there any way to generate an MPD file with FFmpeg when I segment my video? Or maybe generate an HLS playlist that I can then convert into an MPD manifest? Any info would be awesome. This is my current FFmpeg command (working).
Note: the $ variables are positional arguments to a bash script and set the input file, bitrate, and scale.
/usr/bin/ffmpeg \
-re \
-i $1 \
-an \
-c:v libx264 \
-b:v $7 \
-b:a 196k \
-strict -2 \
-movflags faststart \
-pix_fmt yuv420p \
-vf "scale='$4:trunc(ow/a/2)*2'" \
-flags -global_header \
-map 0 \
-f segment \
-segment_time 5 \
-segment_list test.m3u8 \
-segment_format mpegts \
$2%05d.mp4
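Separately, I have read that newer FFmpeg builds ship a native dash muxer (-f dash) that writes the MPD itself, but I have not confirmed which version I would need or whether -seg_duration is the right option (older builds apparently use -min_seg_duration in microseconds instead). The input name, bitrates, and scales below are placeholders rather than my real script values; this is only a sketch of what I imagine the invocation would look like, not something I have tested:
# untested sketch - placeholder names/bitrates, assumes a build that includes the dash muxer
ffmpeg \
-i input.mp4 \
-map 0:v -map 0:v \
-an \
-c:v libx264 \
-b:v:0 800k -filter:v:0 "scale=640:-2" \
-b:v:1 2400k -filter:v:1 "scale=1280:-2" \
-use_template 1 \
-use_timeline 1 \
-seg_duration 5 \
-adaptation_sets "id=0,streams=v" \
-f dash manifest.mpd
If that pans out, it might also cover the HLS side of my question: from what I can tell the same muxer has an -hls_playlist option that writes an HLS playlist next to the MPD, but I have not verified that either.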
I also came across the webm_dash_manifest live-streaming example below (I believe it is from the FFmpeg WebM DASH wiki), but it is WebM-specific rather than the H.264/MP4 output I have:
ffmpeg \
-f webm_dash_manifest -live 1 \
-i /var/www/webm_live/glass_360.hdr \
-f webm_dash_manifest -live 1 \
-i /var/www/webm_live/glass_171.hdr \
-c copy \
-map 0 -map 1 \
-f webm_dash_manifest -live 1 \
-adaptation_sets "id=0,streams=0 id=1,streams=1" \
-chunk_start_index 1 \
-chunk_duration_ms 2000 \
-time_shift_buffer_depth 7200 \
-minimum_update_period 7200 \
/var/www/webm_live/glass_live_manifest.mpd
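For completeness, as far as I can tell the glass_360.hdr / glass_171.hdr inputs in that example are initialization headers written by FFmpeg's webm_chunk muxer, which also produces the actual .chk media chunks. My rough reading of the capture/encode side of that same example, with the devices, codecs, and paths taken straight from it rather than from my setup:
# untested sketch - capture devices, codecs and paths copied from the WebM live example, not my pipeline
ffmpeg \
-f v4l2 -i /dev/video0 \
-f alsa -i hw:0 \
-map 0:0 \
-pix_fmt yuv420p \
-c:v libvpx-vp9 -s 640x360 -g 60 -keyint_min 60 \
-f webm_chunk \
-header "/var/www/webm_live/glass_360.hdr" \
-chunk_start_index 1 \
/var/www/webm_live/glass_360_%d.chk \
-map 1:0 \
-c:a libvorbis -b:a 128k -ar 44100 \
-f webm_chunk \
-audio_chunk_duration 2000 \
-header "/var/www/webm_live/glass_171.hdr" \
-chunk_start_index 1 \
/var/www/webm_live/glass_171_%d.chk
From what I understand, the manifest command above then reads those .hdr files while the chunks are being written, so this whole path would only help me if I re-encoded everything to VP8/VP9.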