I am looking for an efficient way to do the following:
Using several source videos (of approximately the same length), I need to generate an output video that is composed of all of the original sources, each running in its own area (like a bunch of PIPs in several different sizes). So the end result is that all the originals are running side by side, each in its own area/box.
The source and output need to be FLV, and the platform I am using is Windows (development on Windows 7 64-bit, deployment to Windows Server 2008).
I have looked at AviSynth, but unfortunately it can't handle FLV, and none of the plugins and FLV splitters I have tried worked.
My current process uses ffmpeg in the following manner: I extract the frames from each source video with ffmpeg, then use the System.Drawing namespace to combine each set of frames into a new image - starting with a static background, loading each frame into an Image and drawing it to the background Graphics object - which gives me the combined frames.

All this is very IO intensive (which is my processing bottleneck at the moment), and I feel there must be a more efficient way to reach my goal. I do not have much experience with video processing and don't know what options are out there.
Can anyone suggest a more efficient way of processing these?
Do everything inside ffmpeg. You can do a lot of things with video filters. For example, to join two videos side by side:
ffmpeg -i input0.avi -vf "movie=input1.avi [in1]; [in]pad=640*2:352[in0]; [in0][in1] overlay=640:0 [out]" out.avi
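With a newer ffmpeg build you can do the same without the movie source filter by giving both files as regular inputs and using -filter_complex. A rough equivalent of the command above, assuming both inputs are 640x352:

ffmpeg -i input0.avi -i input1.avi -filter_complex "[0:v]pad=iw*2:ih[bg]; [bg][1:v]overlay=W/2:0" out.avi

Here pad doubles the width of the first input to make room, and overlay places the second input in the right half (W is the width of the padded video).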
@Oded: That's basically what this command does. You can remove the pad filter and change the parameters of the overlay filter to move the second video wherever you like.
ffmpeg -i big.avi -vf "movie=small.avi [small]; [in][small] overlay=10:10 [out]" out.avi
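The same approach works directly with your FLV files, and the overlay position can be an expression instead of a fixed number. For example, to pin the small video to the bottom-right corner with a 10-pixel margin (file names are placeholders):

ffmpeg -i big.flv -vf "movie=small.flv [small]; [in][small] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" out.flv

main_w/main_h refer to the size of the big video and overlay_w/overlay_h to the size of the small one.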
The link I provided describes the filter syntax. You can chain multiple filters together:
ffmpeg -i big.avi -vf "movie=small0.avi [small0]; [in][small0] overlay=10:10 [tmp];\
movie=small1.avi [small1]; [tmp][small1] overlay=30:10 [out]" out.avi
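For the layout in the question, the same chaining scales up to a full grid. A sketch with four 640x352 FLV sources tiled 2x2, using the -filter_complex form (names and sizes are placeholders to adjust to your own sources):

ffmpeg -i src0.flv -i src1.flv -i src2.flv -i src3.flv -filter_complex \
"[0:v]pad=iw*2:ih*2[bg]; [bg][1:v]overlay=W/2:0[tmp1]; \
 [tmp1][2:v]overlay=0:H/2[tmp2]; [tmp2][3:v]overlay=W/2:H/2" out.flv

pad turns the first source into the top-left quadrant of a double-sized canvas, and each overlay drops the next source into its own quadrant. Note that ffmpeg only keeps one audio stream by default; mixing the audio from all sources would need a separate audio filter step.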