WebRTC Study Notes (1)

WebRTC

  • What is WebRTC
    An open-source library for audio/video processing plus real-time communication

  • What can WebRTC do

    • Real-time audio/video interaction
    • Games, instant messaging, file transfer, and so on
    • Transport, plus audio/video processing (echo cancellation, noise suppression, etc.)
  • WebRTC architecture


    [Figure 1: WebRTC architecture diagram (1.png)]
  • WebRTC source directory structure

api  The WebRTC API layer; browsers call into WebRTC through this interface
call  Data-flow management; a Call represents all data flowing in and out of one endpoint
video  Video-related logic
audio  Audio-related logic
common_audio   Audio algorithms
common_video   Video algorithms
media  Multimedia logic, such as codec handling
logging  Logging
modules  An important directory holding the submodules

pc  Peer Connection; connection-related logic
p2p   Peer-to-peer code (STUN, TURN)
rtc_base  Base code, e.g. unified interfaces for threads and locks
rtc_tools  Tools for audio/video analysis
tools_webrtc  WebRTC testing tools, such as the network simulator
system_wrappers  OS-dependent code: CPU features, atomic operations, etc.
stats  Classes for collecting various statistics
sdk  Android and iOS platform code, e.g. video capture and rendering
  • WebRTC modules directory
audio_coding   Audio codec code
audio_device   Audio capture and playback code
audio_mixer    Audio mixing code
audio_processing  Audio pre/post-processing code
bitrate_controller   Bitrate control code
congestion_controller  Congestion control code
desktop_capture  Desktop capture code

pacing  Bitrate probing and smoothing code
remote_bitrate_estimator  Remote bitrate estimation code
rtp_rtcp  RTP/RTCP protocol code
video_capture  Video capture code
video_processing  Video pre/post-processing code

Tracks and Streams (Track, MediaStream)

Important WebRTC classes

  • MediaStream

  • RTCPeerConnection

  • RTCDataChannel

PeerConnection call flow

[Figure 2: PeerConnection call flow (2.jpg)]
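The caller side of this flow (create an offer, install it locally, ship it to the peer) can be sketched as below. `pc` stands in for an RTCPeerConnection and `signaling` for an application-defined signaling channel; both objects here are illustrative stand-ins, not real WebRTC APIs, although createOffer and setLocalDescription are the real method names.

```javascript
// Minimal caller-side negotiation sketch (pc and signaling are stand-ins).
async function negotiate(pc, signaling) {
    // 1. create an SDP offer describing our media capabilities
    const offer = await pc.createOffer();
    // 2. install it locally so ICE candidate gathering can start
    await pc.setLocalDescription(offer);
    // 3. deliver it to the remote peer over the signaling channel
    signaling.send({ type: 'offer', sdp: offer.sdp });
    return offer;
}
```

The answering peer mirrors this with setRemoteDescription / createAnswer, which is the other half of the sequence in the figure.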

Related APIs

  • Enumerating audio/video devices (enumerateDevices)

var ePromise = navigator.mediaDevices.enumerateDevices();

- The returned objects are MediaDeviceInfo instances
    - deviceId: device ID
    - label: device name
    - kind: device kind
    - groupId: two devices with the same groupId belong to the same physical device
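Grouping the resolved list by `kind` makes these fields concrete. A small helper sketch; the device objects used in the test are made up for illustration, while in the browser you would pass the real result of navigator.mediaDevices.enumerateDevices():

```javascript
// Bucket a list of MediaDeviceInfo-like objects by their kind,
// falling back to deviceId when the label is empty (labels are
// hidden until the user grants device access).
function groupByKind(deviceInfos) {
    const groups = { audioinput: [], audiooutput: [], videoinput: [] };
    for (const info of deviceInfos) {
        if (groups[info.kind]) groups[info.kind].push(info.label || info.deviceId);
    }
    return groups;
}
```

In the browser: `navigator.mediaDevices.enumerateDevices().then(infos => console.log(groupByKind(infos)));`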
  • Audio/video capture API

let promise=navigator.mediaDevices.getUserMedia(constraints);

  • MediaStreamConstraints

    dictionary MediaStreamConstraints {
        (boolean or MediaTrackConstraints) video = false;
        (boolean or MediaTrackConstraints) audio = false;
    };
    
  • Demo

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>

<body>
    <video id="player" autoplay playsinline></video>
</body>

</html>
<script>
    if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
        console.log('navigator.mediaDevices is not supported');
    } else {
        // capture video and audio
        const constraints = {
            video: true,
            audio: true
        }
        navigator.mediaDevices.getUserMedia(constraints)
            .then(gotMediaStream)
            .catch(handleError);
    }
    let videoplay = document.querySelector('video#player');

    function gotMediaStream(stream) {
        // stream is the captured media stream
        videoplay.srcObject = stream;
    }
    function handleError(err) {
        console.log(err);
    }

</script>
  • Different implementations of getUserMedia

    • getUserMedia(w3c)
    • webkitGetUserMedia(chrome)
    • mozGetUserMedia (Firefox)
  • Writing your own shim

var getUserMedia=navigator.getUserMedia||
                navigator.webkitGetUserMedia||
                navigator.mozGetUserMedia;
  • Using Google's open-source adapter library, adapter.js

https://webrtc.github.io/adapter/adapter-latest.js

  • Demo
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <video id="player" autoplay playsinline></video>
</body>

</html>
<script>
    if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
        console.log('navigator.mediaDevices is not supported');
    } else {
        // capture video only (audio disabled here)
        const constraints = {
            video: true,
            audio: false
        }
        navigator.mediaDevices.getUserMedia(constraints)
            .then(gotMediaStream)
            .then(gotDevice)
            .catch(handleError);
    }
    let videoplay = document.querySelector('video#player');

    function gotMediaStream(stream) {
        // stream is the captured media stream
        videoplay.srcObject = stream;
        // getting the stream means the user granted device access; enumerateDevices() returns a promise
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);
            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }
</script>
  • Video constraints
width: width
height: height
aspectRatio: aspect ratio (width/height)

frameRate: frame rate (a higher frame rate gives smoother video)
facingMode:
    user: front camera
    environment: rear camera
    left: front-left camera
    right: front-right camera

resizeMode: whether the picture is cropped
  • Audio constraints
volume: volume (0-1.0)
sampleRate: sample rate
sampleSize: sample size (bits per sample)
echoCancellation: true/false, enable or disable echo cancellation
autoGainControl: true/false, automatic gain (boosts the volume, within limits)
noiseSuppression: true/false, enable noise suppression

latency: delay (lower latency means better interactivity, but may cause stuttering on a bad network); under 500 ms is already good quality, ideally under 200 ms
channelCount: mono or stereo; mono is usually enough, but e.g. live music lessons are better in stereo
deviceId: lets you switch between multiple devices, e.g. switching cameras
groupId: identifies devices that belong to the same physical device

If you give minimum and maximum values, the browser automatically picks the best settings it can satisfy within that range:

{
    audio: true,
    video: {
        width: {
            min: 300,
            max: 640
        },
        height: {
            min: 300,
            max: 480
        },
        frameRate: {
            min: 15,
            max: 30
        }
    }
}
  • Demo
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <video id="player" autoplay playsinline></video>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture video and audio
            const constraints = {
                video: {
                    width: 320,
                    height: 240,
                    frameRate: 10,
                    facingMode: 'environment',
                    deviceId: deviceId ? deviceId : undefined
                },
                audio: {
                    noiseSuppression: true,
                    echoCancellation: true
                }
            }
            navigator.mediaDevices.getUserMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let videoplay = document.querySelector('video#player');

    function gotMediaStream(stream) {
        // stream is the captured media stream
        videoplay.srcObject = stream;
        // getting the stream means the user granted device access; enumerateDevices() returns a promise
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    function gotDevice(deviceInfos) {
        // clear first to avoid duplicate entries
        audioSource.innerHTML = "";
        audioOutput.innerHTML = "";
        videoSource.innerHTML = "";
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);
            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;
</script>

Video effects in the browser

  • CSS filters: -webkit-filter/filter
  • How to associate the video element with a filter
  • OpenGL/Metal, etc.: even when you use CSS filters, the underlying rendering is done by these graphics libraries

Supported effect types

grayscale  grayscale
opacity  opacity
sepia  sepia tone
brightness  brightness
saturate  saturation
contrast  contrast
hue-rotate  hue rotation
blur  blur
invert  color inversion
drop-shadow  drop shadow
  • Demo
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<style>
    .none{
        -webkit-filter: none;
    }
    .blur{
        -webkit-filter: blur(3px);
    }
    .grayscale{
        -webkit-filter: grayscale(1);
    }
    .invert{
        -webkit-filter: invert(1);
    }
    .sepia{
        -webkit-filter: sepia(1);
    }
</style>
<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <div>
        <label for="">Effect</label>
        <select id="filter">
            <option value="none">None</option>
            <option value="blur">Blur</option>
            <option value="grayscale">Grayscale</option>
            <option value="invert">Invert</option>
            <option value="sepia">Sepia</option>
        </select>
    </div>
    <video id="player" autoplay playsinline></video>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture video and audio
            const constraints = {
                video: {
                    width: 320,
                    height: 240,
                    frameRate: 10,
                    facingMode: 'environment',
                    deviceId: deviceId ? deviceId : undefined
                },
                audio: {
                    noiseSuppression: true,
                    echoCancellation: true
                }
            }
            navigator.mediaDevices.getUserMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let videoplay = document.querySelector('video#player');

    function gotMediaStream(stream) {
        // stream is the captured media stream
        videoplay.srcObject = stream;
        // getting the stream means the user granted device access; enumerateDevices() returns a promise
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    let filterSelect = document.querySelector("select#filter");
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);

            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;

    filterSelect.onchange=function(){
        videoplay.className=filterSelect.value;
    }
</script>

Capturing an image from the video

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<style>
    .none{
        -webkit-filter: none;
    }
    .blur{
        -webkit-filter: blur(3px);
    }
    .grayscale{
        -webkit-filter: grayscale(1);
    }
    .invert{
        -webkit-filter: invert(1);
    }
    .sepia{
        -webkit-filter: sepia(1);
    }
</style>
<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <div>
        <label for="">Effect</label>
        <select id="filter">
            <option value="none">None</option>
            <option value="blur">Blur</option>
            <option value="grayscale">Grayscale</option>
            <option value="invert">Invert</option>
            <option value="sepia">Sepia</option>
        </select>
    </div>
    <div>
        <button id="snapshot">Take snapshot</button>
    </div>
    <div>
        <canvas id="picture"></canvas>
    </div>
    <video id="player" autoplay playsinline></video>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture video and audio
            const constraints = {
                video: {
                    width: 320,
                    height: 240,
                    frameRate: 10,
                    facingMode: 'environment',
                    deviceId: deviceId ? deviceId : undefined
                },
                audio: {
                    noiseSuppression: true,
                    echoCancellation: true
                }
            }
            navigator.mediaDevices.getUserMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let videoplay = document.querySelector('video#player');
    let snapshot=document.querySelector('button#snapshot');
    let picture=document.querySelector('canvas#picture');
    picture.width=320;
    picture.height=240;


    function gotMediaStream(stream) {
        // stream is the captured media stream
        videoplay.srcObject = stream;
        // getting the stream means the user granted device access; enumerateDevices() returns a promise
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    let filterSelect = document.querySelector("select#filter");
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);

            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;
    // effect selection
    filterSelect.onchange = function () {
        videoplay.className = filterSelect.value;
    }
    // take a snapshot
    snapshot.onclick = function () {
        // optional: only needed if the snapshot itself should carry the filter
        // picture.className = filterSelect.value;
        picture.getContext('2d').drawImage(videoplay, 0, 0, picture.width, picture.height);
    }
</script>

Hearing the sound in real time

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <!-- controls shows the play/pause controls -->
    <audio id="audioplayer" controls autoplay></audio>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture audio only
            const constraints = {
                audio: true
            }
            navigator.mediaDevices.getUserMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let audioplayer = document.querySelector('audio#audioplayer');

    function gotMediaStream(stream) {
        audioplayer.srcObject = stream;
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);

            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;
</script>

MediaStream API

WebRTC has the concepts of streams and tracks: one stream can contain many tracks, such as audio media tracks and video media tracks.

Methods

  • MediaStream.addTrack(): add a track
  • MediaStream.removeTrack(): remove a track
  • MediaStream.getVideoTracks(): get all video tracks
  • MediaStream.getAudioTracks(): get all audio tracks

Events

  • MediaStream.onaddtrack: fired when a track is added
  • MediaStream.onremovetrack: fired when a track is removed
  • MediaStream.onended: fired when the stream ends
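A tiny sketch of the track accessors above. The object used in the test only imitates a MediaStream (a real one comes from getUserMedia()), so the helper is written against the duck-typed interface:

```javascript
// Summarize a MediaStream-like object via its track accessors.
function describeStream(stream) {
    return {
        audioTracks: stream.getAudioTracks().length,
        videoTracks: stream.getVideoTracks().length
    };
}
```

In the browser you would call `describeStream(stream)` inside the getUserMedia success callback.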

Getting the actual video settings (via the stream)

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <div>
        <label for="">Effect</label>
        <select id="filter">
            <option value="none">None</option>
            <option value="blur">Blur</option>
            <option value="grayscale">Grayscale</option>
            <option value="invert">Invert</option>
            <option value="sepia">Sepia</option>
        </select>
    </div>
    <div>
        <canvas id="picture"></canvas>
    </div>
    <table>
        <tr>
           <td>
            <video id="player" autoplay playsinline></video>
           </td>
           <td>
               <div id="constraints"></div>
           </td>
        </tr>
    </table>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture video and audio
            const constraints = {
                video: {
                    width: 320,
                    height: 240,
                    frameRate: 10,
                    facingMode: 'environment',
                    deviceId: deviceId ? deviceId : undefined
                },
                audio: {
                    noiseSuppression: true,
                    echoCancellation: true
                }
            }
            navigator.mediaDevices.getUserMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let videoplay = document.querySelector('video#player');
    let divConstraints = document.querySelector('div#constraints');

    function gotMediaStream(stream) {
        videoplay.srcObject = stream;
        let videoTrack=stream.getVideoTracks()[0];
        let videoConstraints =videoTrack.getSettings();
        divConstraints.textContent= JSON.stringify(videoConstraints,null,2);

        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);

            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;
</script>

Recording

  1. MediaRecorder

let mediaRecorder=new MediaRecorder(stream,[options]);

Parameters:
stream  The media stream; it can come from getUserMedia, a <video>, <audio>, or <canvas> element
options  Recording options

Options:
mimeType
    video/webm
    audio/webm
    video/webm;codecs=vp8
    video/webm;codecs=h264
    audio/webm;codecs=opus
    and so on; other containers (e.g. mp4) may also be available, and each container supports a different set of codecs

audioBitsPerSecond  audio bitrate
videoBitsPerSecond  video bitrate
bitsPerSecond  overall bitrate
  2. API
MediaRecorder.start(timeslice)
Starts recording; timeslice is optional, and if given the recorded data is delivered in slices of that duration
MediaRecorder.stop()
Stops recording; fires a dataavailable event carrying the final Blob data
MediaRecorder.pause()
Pauses recording
MediaRecorder.resume()
Resumes recording
MediaRecorder.isTypeSupported()
Checks whether a given recording format (mp4, webm, etc.) is supported
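isTypeSupported() is typically used to fall back through a list of candidate mimeType values before constructing the recorder. A sketch, with the support check injected as a parameter so the logic can run outside a browser; in the browser you would pass MediaRecorder.isTypeSupported.bind(MediaRecorder):

```javascript
// Return the first MIME type the recorder supports, or '' to let
// MediaRecorder pick its own default.
function pickSupportedMimeType(candidates, isTypeSupported) {
    for (const mimeType of candidates) {
        if (isTypeSupported(mimeType)) return mimeType;
    }
    return '';
}
```

The recording demos below hard-code 'video/webm;codecs=vp8'; this helper is one way to degrade gracefully instead.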
  3. Events
MediaRecorder.ondataavailable
Fired each time a timeslice's worth of data has been recorded (or once for the whole recording if no timeslice was given); the handler receives an event whose data property holds the recorded chunk
MediaRecorder.onerror
Fired when an error occurs; recording stops automatically
  4. Ways to store binary data in JavaScript
Strings
Blob
ArrayBuffer
ArrayBufferView
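Of these, Blob is what the recording demos use: timeslice chunks are pushed into an array and later wrapped in one Blob. That assembly step looks like this; plain strings stand in here for the binary chunks a MediaRecorder would deliver via dataavailable:

```javascript
// Collect chunks, then merge them into a single Blob with a container
// MIME type so it can be played back or downloaded.
const chunks = ['chunk1', 'chunk2'];   // stand-ins for event.data
const recording = new Blob(chunks, { type: 'video/webm' });
console.log(recording.size, recording.type); // total byte size and MIME type
```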
  5. Demo: record, play back, and download video
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <video autoplay playsinline id="player"></video>
    <video playsinline id="recplayer"></video>
    <button id="record">Record</button>
    <button id="recplay" disabled>Play</button>
    <button id="download" disabled>Download</button>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture video and audio
            const constraints = {
                video: {
                    width: 320,
                    height: 240,
                    frameRate: 10,
                    facingMode: 'environment',
                    deviceId: deviceId ? deviceId : undefined
                },
                audio: {
                    noiseSuppression: true,
                    echoCancellation: true
                }
            }
            navigator.mediaDevices.getUserMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let videoplay = document.querySelector('video#player');
    let recplayer = document.querySelector('video#recplayer');
    let btnRecord = document.querySelector('button#record');
    let btnRecplay = document.querySelector('button#recplay');
    let btnDownload = document.querySelector('button#download');
    function gotMediaStream(stream) {
        videoplay.srcObject = stream;
        window.stream = stream;
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    let buffer;
    let mediaRecorder;
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);

            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;

    btnRecord.onclick = function () {
        // toggle based on the recorder state rather than the button label
        if (mediaRecorder && mediaRecorder.state === 'recording') {
            stopRecord();
            btnRecord.textContent = "Record";
            btnRecplay.disabled = false;
            btnDownload.disabled = false;
        } else {
            startRecord();
            btnRecord.textContent = "Stop recording";
            btnRecplay.disabled = true;
            btnDownload.disabled = true;
        }
    }
    // play back the recorded video
    btnRecplay.onclick = function () {
        let blob = new Blob(buffer, { type: 'video/webm' });
        recplayer.srcObject = null;
        recplayer.src = window.URL.createObjectURL(blob);
        recplayer.controls = true;
        recplayer.play();
    }
    // download the recorded video
    btnDownload.onclick = function () {
        let blob = new Blob(buffer, { type: 'video/webm' });
        const url = window.URL.createObjectURL(blob);
        let a = document.createElement('a');
        a.href = url;
        a.style.display = 'none'; // keep the link hidden; the click below triggers the download
        a.download = 'aaa.webm'; // file name of the download; the file can be opened and played in a browser
        a.click(); // trigger the download
    }
    function handleonDataAvailable(e) {
        if (e && e.data && e.data.size > 0) {
            buffer.push(e.data);
        }
    }
    // start recording
    function startRecord() {
        buffer = [];
        const options = {
            mimeType: 'video/webm;codecs=vp8'
        }
        if (!MediaRecorder.isTypeSupported(options.mimeType)) {
            console.error(`${options.mimeType} is not supported`);
        }
        try {
            mediaRecorder = new MediaRecorder(window.stream, options);
        } catch (error) {
            console.log(error);
            return
        }
        mediaRecorder.ondataavailable = handleonDataAvailable;
        mediaRecorder.start(10);
    }
    // stop recording
    function stopRecord() {
        mediaRecorder.stop();
    }
</script>

Recording the desktop

This requires enabling Chrome's experimental features, as shown in the figure below.


[Figure 3: Chrome experimental-features flag (3.jpg)]
  1. getDisplayMedia

let promise=navigator.mediaDevices.getDisplayMedia(constraints);

  • constraints is optional
    The constraints are the same as those of getUserMedia
  2. Demo
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Capture video and audio</title>
</head>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

<body>
    <div>
        <label for="">Audio input</label>
        <select id="audioSource"></select>
    </div>
    <div>
        <label for="">Audio output</label>
        <select id="audioOutput"></select>
    </div>
    <div>
        <label for="">Video input</label>
        <select id="videoSource"></select>
    </div>
    <video autoplay playsinline id="player"></video>
    <video playsinline id="recplayer"></video>
    <button id="record">Record</button>
    <button id="recplay" disabled>Play</button>
    <button id="download" disabled>Download</button>
</body>

</html>
<script>
    function start() {
        if (!navigator.mediaDevices || !navigator.mediaDevices.getDisplayMedia) {
            console.log('navigator.mediaDevices is not supported');
            return;
        } else {
            const deviceId = videoSource.value;
            // capture the screen
            const constraints = {
                video: {
                    frameRate: 10,
                    facingMode: 'environment',
                    deviceId: deviceId ? deviceId : undefined
                },
                audio: {
                    noiseSuppression: true,
                    echoCancellation: true
                }
            }
            navigator.mediaDevices.getDisplayMedia(constraints)
                .then(gotMediaStream)
                .then(gotDevice)
                .catch(handleError);
        }
    }

    let videoplay = document.querySelector('video#player');
    let recplayer = document.querySelector('video#recplayer');
    let btnRecord = document.querySelector('button#record');
    let btnRecplay = document.querySelector('button#recplay');
    let btnDownload = document.querySelector('button#download');
    function gotMediaStream(stream) {
        videoplay.srcObject = stream;
        window.stream = stream;
        return navigator.mediaDevices.enumerateDevices();
    }
    function handleError(err) {
        console.log(err);
    }
    let audioSource = document.querySelector("select#audioSource");
    let audioOutput = document.querySelector("select#audioOutput");
    let videoSource = document.querySelector("select#videoSource");
    let buffer;
    let mediaRecorder;
    function gotDevice(deviceInfos) {
        deviceInfos.forEach(deviceInfo => {
            const { kind } = deviceInfo;
            let option = document.createElement('option');
            option.value = deviceInfo.deviceId;
            option.text = deviceInfo.label;
            if (kind === 'audioinput') {
                audioSource.appendChild(option);

            } else if (kind === 'audiooutput') {
                audioOutput.appendChild(option);
            } else if (kind === 'videoinput') {
                videoSource.appendChild(option);
            }
        });
    }

    start();
    // reselecting the video input triggers start again
    videoSource.onchange = start;

    btnRecord.onclick = function () {
        // toggle based on the recorder state rather than the button label
        if (mediaRecorder && mediaRecorder.state === 'recording') {
            stopRecord();
            btnRecord.textContent = "Record";
            btnRecplay.disabled = false;
            btnDownload.disabled = false;
        } else {
            startRecord();
            btnRecord.textContent = "Stop recording";
            btnRecplay.disabled = true;
            btnDownload.disabled = true;
        }
    }
    // play back the recorded video
    btnRecplay.onclick = function () {
        let blob = new Blob(buffer, { type: 'video/webm' });
        recplayer.srcObject = null;
        recplayer.src = window.URL.createObjectURL(blob);
        recplayer.controls = true;
        recplayer.play();
    }
    // download the recorded video
    btnDownload.onclick = function () {
        let blob = new Blob(buffer, { type: 'video/webm' });
        const url = window.URL.createObjectURL(blob);
        let a = document.createElement('a');
        a.href = url;
        a.style.display = 'none'; // keep the link hidden; the click below triggers the download
        a.download = 'aaa.webm'; // file name of the download; the file can be opened and played in a browser
        a.click(); // trigger the download
    }
    function handleonDataAvailable(e) {
        if (e && e.data && e.data.size > 0) {
            buffer.push(e.data);
        }
    }
    // start recording
    function startRecord() {
        buffer = [];
        const options = {
            mimeType: 'video/webm;codecs=vp8'
        }
        if (!MediaRecorder.isTypeSupported(options.mimeType)) {
            console.error(`${options.mimeType} is not supported`);
        }
        try {
            mediaRecorder = new MediaRecorder(window.stream, options);
        } catch (error) {
            console.log(error);
            return
        }
        mediaRecorder.ondataavailable = handleonDataAvailable;
        mediaRecorder.start(10);
    }
    // stop recording
    function stopRecord() {
        mediaRecorder.stop();
    }
</script>