Is it possible to broadcast audio along with screen sharing in WebRTC?
Simply calling getUserMedia
with audio: true
fails with a permission denied error.
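For reference, this is roughly the kind of call that fails (a minimal sketch assuming the legacy callback-based getUserMedia and Chrome's chromeMediaSource screen constraint):

navigator.getUserMedia(
    {
        audio: true, // adding audio here is what triggers the permission denied error
        video: { mandatory: { chromeMediaSource: 'screen' } }
    },
    function (stream) {
        // attach the stream to a peer connection for broadcasting
    },
    function (error) {
        console.error(error);
    }
);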
Is there any workaround that could be used to broadcast audio as well?
Will audio be implemented alongside screen sharing?
Thanks.
Refer to this demo: Share screen and audio/video from a single peer connection!
Multiple streams are captured and attached to a single peer connection. AFAIK, audio along with chromeMediaSource:screen
is "still" not permitted.
Now you can capture audio+screen using a single getUserMedia request on both Firefox and Chrome.
However, Chrome merely supports audio+tab, i.e. you can NOT capture the full screen along with audio.
Audio+Tab means any Chrome tab along with the microphone.
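In today's browsers the single-request capture is done with getDisplayMedia rather than getUserMedia (this substitution is my assumption; Chrome only offers the audio option when a tab is being shared):

navigator.mediaDevices.getDisplayMedia({ video: true, audio: true })
    .then(function (stream) {
        // on Chrome the audio track is only present for tab sharing
        stream.getTracks().forEach(function (track) {
            peerConnection.addTrack(track, stream); // peerConnection is illustrative
        });
    })
    .catch(function (error) {
        console.error('Screen capture failed:', error);
    });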
You can capture both audio and screen streams by making two parallel (unique) getUserMedia requests.
Now you can use the addTrack
method to add the audio track into the screen stream:
// two parallel (unique) getUserMedia requests:
// one for the microphone, one for the screen
var audioStream = captureUsingGetUserMedia(); // e.g. constraints: { audio: true }
var screenStream = captureUsingGetUserMedia(); // e.g. screen-capture constraints
var audioTrack = audioStream.getAudioTracks()[0];

// add the audio track into the screen stream
screenStream.addTrack(audioTrack);
Now screenStream
has both audio and video tracks.
// attach the combined stream and create the offer (legacy callback-style API)
nativeRTCPeerConnection.addStream(screenStream);
nativeRTCPeerConnection.createOffer(success, failure, options);
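Put together, a minimal runnable sketch of the two-request approach might look like this (promise-based APIs are assumed, getDisplayMedia stands in for the screen-capture constraints, and signaling is omitted):

async function broadcastScreenWithAudio(peerConnection) {
    // two parallel (unique) capture requests: microphone and screen
    var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
    var screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });

    // merge the microphone track into the screen stream
    screenStream.addTrack(audioStream.getAudioTracks()[0]);

    // attach every track, then create and apply the offer
    screenStream.getTracks().forEach(function (track) {
        peerConnection.addTrack(track, screenStream);
    });
    var offer = await peerConnection.createOffer();
    await peerConnection.setLocalDescription(offer);
    return offer; // send this offer over your signaling channel
}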