I'm new to binary data and getUserMedia. What would be the recommended way to stream getUserMedia (video/audio) to a server and then prepare the stream (video/audio) for other connections?
My initial thought was to send the data over websockets and then write the data to a file (mp4) like this:
getUserMedia -->(websockets)--> server -->(file.mp4)--> video.src
I've been looking at MediaStreamRecorder and I can send a buffer like so:
multiStreamRecorder.ondataavailable = function (blobs) {
    socket.emit('blobs', blobs);
};
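For context, the recorder is wired up roughly like this (a sketch from memory; MultiStreamRecorder comes from the MediaStreamRecorder library, and the 3000 ms interval is just an example):

navigator.getUserMedia({ audio: true, video: true }, function (stream) {
    var multiStreamRecorder = new MultiStreamRecorder(stream);
    multiStreamRecorder.ondataavailable = function (blobs) {
        // blobs.audio and blobs.video are Blobs in the browser;
        // they arrive as Buffers on the socket.io server
        socket.emit('blobs', blobs);
    };
    // emit an audio/video pair every 3 seconds
    multiStreamRecorder.start(3000);
}, function (error) {
    console.error(error);
});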
On the server I get:
{ audio: <Buffer 52 49 46 46 2c 10 04 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 01 00 44 ac 00 00 10 b1 02 00 02 00 10 00 64 61 74 61 00 10 04 00 f8 ff 04 00 0d 00 ... >,
video: <Buffer 1a 45 df a3 40 20 42 86 81 01 42 f7 81 01 42 f2 81 04 42 f3 81 08 42 82 40 04 77 65 62 6d 42 87 81 02 42 85 81 02 18 53 80 67 10 0a 41 54 15 49 a9 66 ... >,
onDataAvailableEventFired: true }
Now I think I should write this to a file, serve that file, and then request that file from a video element's source. If that's all correct, how would I go about writing the file to the filesystem? Or am I doing something wrong?
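In case it helps, this is roughly what I mean by writing the buffers out (an untested sketch, assuming a Node.js server with socket.io; the file name is made up):

var fs = require('fs');

// append each incoming video buffer to a growing WebM file
var fileStream = fs.createWriteStream('stream.webm');

socket.on('blobs', function (data) {
    // data.video is a Node.js Buffer of WebM bytes, as in the dump above
    fileStream.write(data.video);
});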
I understand WebRTC has P2P functionality, but I'd be serving the video stream to ~50 or more clients, so it's not an option.
Update with websocket solution:
I'm now emitting the data back over websockets like so:
socket.on('blobs', function (data) {
    // echo the blobs straight back to the sending client
    socket.emit('blobs', data);
});
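Note that socket.emit only goes back to the sending client; to fan the stream out to the ~50 viewers I mentioned, I'd presumably broadcast instead, along these lines (untested, assuming socket.io):

socket.on('blobs', function (data) {
    // relay to every connected client except the sender
    socket.broadcast.emit('blobs', data);
});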
On the client side I'm pushing it into a MediaSource and then a video element with a timestampOffset to keep it smooth.
var mediaSource = new MediaSource(),
    mediaBuffer,
    // init duration of 0 seems fine
    duration = 0;

var video = $('video').get(0);
video.src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function (e) {
    mediaBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
    mediaBuffer.addEventListener('update', function () {
        // wait for mediaBuffer update to fire before setting the new duration
        duration = video.duration;
    });
}, false);

socket.on('blobs', function (data) {
    mediaBuffer.timestampOffset = duration;
    mediaBuffer.appendBuffer(new Uint8Array(data.video));
});
To make this work properly, you need the server to "speak" WebRTC as well. Websockets won't give you the desired result and will not be as responsive as WebRTC; it won't be real time.
To get WebRTC running on a server, you can use the WebRTC stack at webrtc.org or OpenWebRTC as a starting point, use GStreamer, or go for something a bit more complete. Good WebRTC media server frameworks to start from are Kurento, Jitsi and Janus.
Due to the nature of your question, my suggestion is to start with one of the more complete media server frameworks mentioned above.
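To give a feel for what that looks like, here is a rough sketch of a one-to-many setup with Kurento's Node.js client, based on their tutorials (the server URL is a placeholder and the error handling is minimal; check the Kurento docs for exact signatures):

var kurento = require('kurento-client');

// connect to a running Kurento Media Server (URL is an example)
kurento('ws://localhost:8888/kurento', function (error, client) {
    if (error) return console.error(error);

    client.create('MediaPipeline', function (error, pipeline) {
        if (error) return console.error(error);

        // one WebRtcEndpoint for the presenter...
        pipeline.create('WebRtcEndpoint', function (error, presenter) {
            if (error) return console.error(error);

            // ...and one per viewer, connected to the presenter
            pipeline.create('WebRtcEndpoint', function (error, viewer) {
                if (error) return console.error(error);

                presenter.connect(viewer, function (error) {
                    if (error) return console.error(error);
                    // SDP offers/answers are exchanged with the browsers
                    // over your own signaling channel (e.g. the websocket
                    // you already have)
                });
            });
        });
    });
});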