算法101 · Swift · https://juejin.im/user/599fe9216fb9a0249d616ba8

DIY Audio Streaming in Swift: Buffering with Audio File Stream Services, Effects with AVAudioEngine


播放網(wǎng)絡(luò)音頻,可以先下載好,得到音頻文件,簡單了
使用 AVAudioPlayer 播放,就完
蘋果封裝下,AVAudioPlayer 處理本地文件,很方便

直接拿到一個文件地址 url,播放

A simplified mental model:

To make audio easy to transmit, it is usually compressed (MP3 and so on): compressed files are small, so they transfer fast.
The sound card, however, consumes PCM buffers.
Apple converts the compressed format into raw, uncompressed PCM for you, and also schedules the playback resources: it pulls buffers out of the PCM data, piece by piece, and feeds them to the sound card.

(In practice this is not two sequential steps; the stages run in parallel.)
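To get a rough sense of why compression matters here, a quick back-of-the-envelope calculation in pure Swift (the 128 kbps figure is just a typical MP3 bitrate, not from the article):

```swift
// Uncompressed CD-quality PCM: 44,100 samples/s × 2 channels × 2 bytes (Int16)
let pcmBytesPerSecond = 44_100 * 2 * 2              // 176,400 B/s
// A typical 128 kbps MP3 stream (illustrative bitrate):
let mp3BytesPerSecond = 128_000 / 8                 // 16,000 B/s
let ratio = Double(pcmBytesPerSecond) / Double(mp3BytesPerSecond)
print(pcmBytesPerSecond, mp3BytesPerSecond, ratio)  // 176400 16000 11.025
```

Roughly an 11× difference, which is why files travel compressed and get expanded to PCM only at playback time.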

Now let's do it by hand.

This article covers playing streaming audio directly:

as each audio data packet arrives from the network, play it.


[Figure: streamer overview diagram]

There are four steps:

1. Network audio file >> downloaded audio data

Download the file's binary data:
create a task with URLSession to fetch the network file,
and process each Data chunk as it arrives.
In this example, one Data chunk corresponds to one audio packet, which corresponds to one audio buffer.

This step is straightforward:
create a URLSessionDataTask and start the download.

Everything interesting happens in the network delegate methods.


extension Downloader: URLSessionDataDelegate {
    // Download started: grab the file's total size
    public func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive response: URLResponse, completionHandler: @escaping (URLSession.ResponseDisposition) -> Void) {
        totalBytesCount = response.expectedContentLength
        completionHandler(.allow)
    }

    // Receiving data
    public func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
        // Update the running total of bytes downloaded
        totalBytesReceived += Int64(data.count)
        // Compute the progress
        progress = Float(totalBytesReceived) / Float(totalBytesCount)
        // Hand the data to the delegate, which parses it into audio packets
        delegate?.download(self, didReceiveData: data, progress: progress)
    }

    // Download finished
    public func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
        state = .completed
        delegate?.download(self, completedWithError: error)
    }
}


Some audio basics first:

An audio file has a container format (the file format) and an encoding format.

Audio data comes in three levels: buffer, packet, frame.

A buffer holds packets;
a packet holds frames.

By encoding, audio is broadly either variable bit rate or constant bit rate.

Constant bit rate (CBR) means uniform sampling; it corresponds to raw, uncompressed PCM.

Variable bit rate (VBR) corresponds to compressed files such as MP3.
Core Audio generally supports VBR through the variable-frame-rate (VFR) layout.

VFR means: every packet has the same byte size,
but the number of frames per packet varies, and each frame carries a varying amount of audio data.

How Core Audio describes the data:

Constant bit rate is described by an ASBD, AudioStreamBasicDescription.
An ASBD is just a set of configuration fields: channel count, sample rate, bit depth, and so on.

The VFR flavor of variable bit rate is described by an ASPD, AudioStreamPacketDescription.
In compressed audio data, VFR maps to ASPDs:
every packet has its own ASPD.

An ASPD contains the packet's position, mStartOffset,
and the number of frames in the packet, mVariableFramesInPacket.
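To make mStartOffset concrete, here is a pure-Swift sketch (the struct is a stand-in for Core Audio's AudioStreamPacketDescription, and the byte sizes are made up): for variable-size packets, each offset is simply the running sum of the previous packet sizes.

```swift
// Stand-in for Core Audio's AudioStreamPacketDescription.
struct PacketDescription {
    var mStartOffset: Int64
    var mDataByteSize: UInt32
    var mVariableFramesInPacket: UInt32
}

// Packet sizes vary across a compressed stream; offsets accumulate.
let packetSizes: [UInt32] = [417, 522, 380]   // hypothetical byte sizes
var offset: Int64 = 0
var descriptions: [PacketDescription] = []
for size in packetSizes {
    descriptions.append(PacketDescription(mStartOffset: offset,
                                          mDataByteSize: size,
                                          mVariableFramesInPacket: 0))
    offset += Int64(size)
}
print(descriptions.map { $0.mStartOffset })   // [0, 417, 939]
```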


[Figure: queue-services diagram]

2. Audio data >> audio packets

Use Audio File Stream Services to parse the binary audio data downloaded in the previous step into audio packets.

2.1 Open the audio processing stream and register the parsing callbacks

public init() throws {
        let context = unsafeBitCast(self, to: UnsafeMutableRawPointer.self)
        // Open a live audio file stream parser and get back its stream ID
        guard AudioFileStreamOpen(context, ParserPropertyChangeCallback, ParserPacketCallback, kAudioFileMP3Type, &streamID) == noErr else {
            throw ParserError.streamCouldNotOpen
        }
    }

2.2 Feed data in and start parsing

    public func parse(data: Data) throws {
        let streamID = self.streamID!
        let count = data.count
        _ = try data.withUnsafeBytes({ (rawBufferPointer) in
            let bufferPointer = rawBufferPointer.bindMemory(to: UInt8.self)
            if let address = bufferPointer.baseAddress {
                // Hand the audio data to the parser;
                // streamID identifies which parser
                let result = AudioFileStreamParseBytes(streamID, UInt32(count), address, [])
                guard result == noErr else {
                    throw ParserError.failedToParseBytes(result)
                }
            }
        })
    }

2.3 First, the stream's metadata is parsed

func ParserPropertyChangeCallback(_ context: UnsafeMutableRawPointer, _ streamID: AudioFileStreamID, _ propertyID: AudioFileStreamPropertyID, _ flags: UnsafeMutablePointer<AudioFileStreamPropertyFlags>) {
    let parser = Unmanaged<Parser>.fromOpaque(context).takeUnretainedValue()
    // Pull out only the properties you care about
    switch propertyID {
    case kAudioFileStreamProperty_DataFormat:
        // Grab the data format
        var format = AudioStreamBasicDescription()
        GetPropertyValue(&format, streamID, propertyID)
        parser.dataFormat = AVAudioFormat(streamDescription: &format)

    case kAudioFileStreamProperty_AudioDataPacketCount:
        // Number of packets in the audio data carved out of the stream
        GetPropertyValue(&parser.packetCount, streamID, propertyID)

    default:
        ()
    }
}

// The pattern: first get the property's size, propSize, then fetch the value itself
func GetPropertyValue<T>(_ value: inout T, _ streamID: AudioFileStreamID, _ propertyID: AudioFileStreamPropertyID) {
    var propSize: UInt32 = 0
    guard AudioFileStreamGetPropertyInfo(streamID, propertyID, &propSize, nil) == noErr else {
        return
    }
    guard AudioFileStreamGetProperty(streamID, propertyID, &propSize, &value) == noErr else {
        return
    }
}

2.4 The packet callback: handle the parsed data

func ParserPacketCallback(_ context: UnsafeMutableRawPointer, _ byteCount: UInt32, _ packetCount: UInt32, _ data: UnsafeRawPointer, _ packetDescriptions: UnsafeMutablePointer<AudioStreamPacketDescription>) {

    // Recover self (the parser)
    let parser = Unmanaged<Parser>.fromOpaque(context).takeUnretainedValue()
    let packetDescriptionsOrNil: UnsafeMutablePointer<AudioStreamPacketDescription>? = packetDescriptions
    // If ASPDs are present, these are compressed packets;
    // uncompressed PCM is described by an ASBD instead
    let isCompressed = packetDescriptionsOrNil != nil
    guard let dataFormat = parser.dataFormat else {
        return
    }
    
    // Iterate over the received data
    // and store it into parser.packets, i.e. self.packets
    if isCompressed {
        for i in 0 ..< Int(packetCount) {
            // Compressed audio: each packet has its own ASPD, so compute per packet
            let packetDescription = packetDescriptions[i]
            let packetStart = Int(packetDescription.mStartOffset)
            let packetSize = Int(packetDescription.mDataByteSize)
            let packetData = Data(bytes: data.advanced(by: packetStart), count: packetSize)
            parser.packets.append((packetData, packetDescription))
        }
    } else {
        // Raw PCM: one uniform configuration for the whole file, so the math is simple
        let format = dataFormat.streamDescription.pointee
        let bytesPerPacket = Int(format.mBytesPerPacket)
        for i in 0 ..< Int(packetCount) {
            let packetStart = i * bytesPerPacket
            let packetSize = bytesPerPacket
            let packetData = Data(bytes: data.advanced(by: packetStart), count: packetSize)
            parser.packets.append((packetData, nil))
        }
    }
}

3. Audio packets >> audio buffers

public required init(parser: Parsing, readFormat: AVAudioFormat) throws {
        // Pull audio data from the parser built in the previous step
        self.parser = parser
        
        guard let dataFormat = parser.dataFormat else {
            throw ReaderError.parserMissingDataFormat
        }

        let sourceFormat = dataFormat.streamDescription
        let commonFormat = readFormat.streamDescription
        // Create an audio format converter
        // by specifying an input format and an output format:
        // the input format is what the parser produced in the previous step,
        // the output format is chosen by the developer
        let result = AudioConverterNew(sourceFormat, commonFormat, &converter)
        guard result == noErr else {
            throw ReaderError.unableToCreateConverter(result)
        }
        self.readFormat = readFormat
    }
    

The developer-chosen output format:

public var readFormat: AVAudioFormat {
        return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 2, interleaved: false)!
    }

// Bit depth: Float32
// Sample rate: 44,100 Hz, standard CD quality
// Two channels, left and right

With the packets parsed in the previous step, we move on to reading audio buffers:

    
    public func read(_ frames: AVAudioFrameCount) throws -> AVAudioPCMBuffer {
        let framesPerPacket = readFormat.streamDescription.pointee.mFramesPerPacket
        var packets = frames / framesPerPacket
        
        // Create an empty AVAudioPCMBuffer with the given format and capacity
        guard let buffer = AVAudioPCMBuffer(pcmFormat: readFormat, frameCapacity: frames) else {
            throw ReaderError.failedToCreatePCMBuffer
        }
        buffer.frameLength = frames
        
        // Convert the parsed packets into an AVAudioPCMBuffer, which AVAudioEngine can play
        try queue.sync {
            let context = unsafeBitCast(self, to: UnsafeMutableRawPointer.self)
            // The configured converter fills the new buffer's data (buffer.mutableAudioBufferList) through the callback ReaderConverterCallback
            let status = AudioConverterFillComplexBuffer(converter!, ReaderConverterCallback, context, &packets, buffer.mutableAudioBufferList, nil)
            guard status == noErr else {
                switch status {
                case ReaderMissingSourceFormatError:
                    throw ReaderError.parserMissingDataFormat
                case ReaderReachedEndOfDataError:
                    throw ReaderError.reachedEndOfFile
                case ReaderNotEnoughDataError:
                    throw ReaderError.notEnoughData
                default:
                    throw ReaderError.converterFailed(status)
                }
            }
        }
        return buffer
    }


  • How AudioConverterFillComplexBuffer is called:

AudioConverterFillComplexBuffer(format converter, callback function, user-data pointer, packet-count pointer, pointer that receives the converted data, pointer that receives the ASPDs)

AudioConverterFillComplexBuffer(converter!, ReaderConverterCallback, context, &packets, buffer.mutableAudioBufferList, nil)
  • And the shape of its callback, ReaderConverterCallback:
    callback(format converter, packet-count pointer, pointer that receives the converted data, pointer that receives the ASPDs, user-data pointer)

Notice that of the six arguments passed to AudioConverterFillComplexBuffer,
all five besides the callback itself reappear as the callback's parameters.


The converter callback: the empty audio buffer created above gets filled with data here.

func ReaderConverterCallback(_ converter: AudioConverterRef,
                             _ packetCount: UnsafeMutablePointer<UInt32>,
                             _ ioData: UnsafeMutablePointer<AudioBufferList>,
                             _ outPacketDescriptions: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?,
                             _ context: UnsafeMutableRawPointer?) -> OSStatus {

    // Recover self (the reader)
    let reader = Unmanaged<Reader>.fromOpaque(context!).takeUnretainedValue()
    
    // Make sure the input format is available
    guard let sourceFormat = reader.parser.dataFormat else {
        return ReaderMissingSourceFormatError
    }
    
    // Reader tracks the playback position, currentPacket;
    // the relative playback position is just an offset.
    // Here we detect reaching the end of the parsed packets.
    //
    // "At the end" splits into two cases, depending on the download:
    // 1. The download (and parsing) finished, and playback reached the real end.
    // 2. The download is unfinished, and everything parsed so far has been played.
    // (Only these two cases, because parsing takes far less time than downloading:
    //  download complete means parsing complete.)
    let packetIndex = Int(reader.currentPacket)
    let packets = reader.parser.packets
    let isEndOfData = packetIndex >= packets.count - 1
    if isEndOfData {
        if reader.parser.isParsingComplete {
            packetCount.pointee = 0
            return ReaderReachedEndOfDataError
        } else {
            return ReaderNotEnoughDataError
        }
    }
    
    // As configured earlier, we process one packet of audio data per call
    let packet = packets[packetIndex]
    var data = packet.0
    let dataCount = data.count
    ioData.pointee.mNumberBuffers = 1
    // Copy the audio data over: allocate memory, then copy from the source address
    ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: dataCount, alignment: MemoryLayout<UInt8>.alignment)

    _ = data.withUnsafeMutableBytes { (rawMutableBufferPointer) in
        let bufferPointer = rawMutableBufferPointer.bindMemory(to: UInt8.self)
        if let address = bufferPointer.baseAddress {
            memcpy((ioData.pointee.mBuffers.mData?.assumingMemoryBound(to: UInt8.self))!, address, dataCount)
        }
    }
    
    ioData.pointee.mBuffers.mDataByteSize = UInt32(dataCount)
    
    // Fill in the ASPD for compressed formats such as MP3 or AAC
    let sourceFormatDescription = sourceFormat.streamDescription.pointee
    if sourceFormatDescription.mFormatID != kAudioFormatLinearPCM {
        if outPacketDescriptions?.pointee == nil {
            outPacketDescriptions?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: 1)
        }
        outPacketDescriptions?.pointee?.pointee.mDataByteSize = UInt32(dataCount)
        outPacketDescriptions?.pointee?.pointee.mStartOffset = 0
        outPacketDescriptions?.pointee?.pointee.mVariableFramesInPacket = 0
    }
    packetCount.pointee = 1

    // Advance the playback position, currentPacket
    reader.currentPacket = reader.currentPacket + 1
    
    return noErr
}



[Figure: AVAudioEngine push-model diagram]

4. Playback and real-time effects with AVAudioEngine

AVAudioEngine can process audio effects in real time, using effect units.

4.1 Playback first

Set up the AudioEngine: attach nodes, then connect them.

func setupAudioEngine() {
        // Attach the nodes
        attachNodes()

        // Connect the nodes
        connectNodes()

        // Prepare the engine
        engine.prepare()
        
        // AVAudioEngine's data flow uses a push model:
        // a timer schedules playback buffers roughly every 0.1 s

        let interval = 1 / (readFormat.sampleRate / Double(readBufferSize))
        let timer = Timer(timeInterval: interval / 2, repeats: true) {
            [weak self] _ in
            guard self?.state != .stopped else {
                return
            }
            // Allocate a buffer and schedule the playback data
            self?.scheduleNextBuffer()
            self?.handleTimeUpdate()
            self?.notifyTimeUpdated()
        }
        RunLoop.current.add(timer, forMode: .common)
    }

    // Attach the player node
    open func attachNodes() {
        engine.attach(playerNode)
    }

    // Connect the player node through to the output
    open func connectNodes() {
        engine.connect(playerNode, to: engine.mainMixerNode, format: readFormat)
    }
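The interval arithmetic above simplifies to readBufferSize / sampleRate. A standalone sketch with assumed values (the article does not show readBufferSize; 8192 frames is a guess purely for illustration):

```swift
let sampleRate = 44_100.0
let readBufferSize = 8_192.0   // frames per scheduled buffer (assumed value)

// interval = 1 / (sampleRate / readBufferSize) == readBufferSize / sampleRate
let interval = 1 / (sampleRate / readBufferSize)
print(interval, interval / 2)  // ≈ 0.186 s of audio per buffer; the timer fires twice per buffer
```

Firing the timer at interval / 2 means buffers are scheduled about twice as fast as they are consumed, so the player node never starves.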

Scheduling playback means handing the data (the audio buffers created in the previous step) to the engine's player node, playerNode:

func scheduleNextBuffer() {
        guard let reader = reader else {
            return
        }
        // Playback is managed through recorded state;
        // the playing state is essentially a switch
        guard !isFileSchedulingComplete || repeats else {
            return
        }

        do {
            // Read the next audio buffer, created as in the previous step
            let nextScheduledBuffer = try reader.read(readBufferSize)
            // and let playerNode consume it
            playerNode.scheduleBuffer(nextScheduledBuffer)
        } catch ReaderError.reachedEndOfFile {
            isFileSchedulingComplete = true
        } catch {  }
    }

Start playback:

public func play() {
        // Only start if not already playing
        guard !playerNode.isPlaying else {
            return
        }
        
        if !engine.isRunning {
            do {
                try engine.start()
            } catch { }
        }
        
        // For a smoother experience, mute before starting
        let lastVolume = volumeRampTargetValue ?? volume
        volume = 0
        
        // The player node starts playing
        playerNode.play()
        
        // Ramp back to normal volume over ~250 ms
        swellVolume(to: lastVolume)
        
        // Update the playback state
        state = .playing
    }

4.2 Then the effects

Add real-time pitch and playback-rate effects.

    // An AVAudioUnitTimePitch unit adjusts playback rate and pitch
    let timePitchNode = AVAudioUnitTimePitch()
    

    override func attachNodes() {
        // Attach the player node
        super.attachNodes()
        // Attach the effect node
        engine.attach(timePitchNode)
    }
    
    // Effectively inserts the effect node between the player node and the output
    override func connectNodes() {
        engine.connect(playerNode, to: timePitchNode, format: readFormat)
        engine.connect(timePitchNode, to: engine.mainMixerNode, format: readFormat)
    }


More details

5. Computing the track duration

First get the total packet count,
accumulated while parsing the downloaded data.

A 2 min 34 s MP3, for example, breaks down into 5,925 packets.

public var totalPacketCount: AVAudioPacketCount? {
        guard let _ = dataFormat else {
            return nil
        }
        // In this example this evaluates to AVAudioPacketCount(packets.count):
        // the packet callback ParserPacketCallback from 2.4
        // parses each chunk downloaded in step 1 and appends it to packets
        return max(AVAudioPacketCount(packetCount), AVAudioPacketCount(packets.count))
    }

Then get the total frame count:

public var totalFrameCount: AVAudioFrameCount? {
        guard let framesPerPacket = dataFormat?.streamDescription.pointee.mFramesPerPacket else {
            return nil
        }
        
        guard let totalPacketCount = totalPacketCount else {
            return nil
        }
        // Total packet count × frames per packet
        return AVAudioFrameCount(totalPacketCount) * AVAudioFrameCount(framesPerPacket)
    }

And compute the audio duration:

public var duration: TimeInterval? {
        guard let sampleRate = dataFormat?.sampleRate else {
            return nil
        }
        
        guard let totalFrameCount = totalFrameCount else {
            return nil
        }
        // Total frame count / sample rate
        return TimeInterval(totalFrameCount) / TimeInterval(sampleRate)
    }
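Plugging the article's numbers into these three properties checks out (1152 frames per packet is the standard MPEG-1 Layer III value; it is not stated in the article):

```swift
let totalPacketCount = 5_925
let framesPerPacket = 1_152        // standard for MPEG-1 Layer III
let sampleRate = 44_100.0

let totalFrameCount = totalPacketCount * framesPerPacket
let duration = Double(totalFrameCount) / sampleRate
print(totalFrameCount, duration)   // 6825600 frames, ≈ 154.8 s, i.e. about 2:34
```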

6. Seeking to a playback position

6.1 In the audio manager, the streamer:
    public func seek(to time: TimeInterval) throws {        
        // Playback needs the parser's packets and the reader's buffers
        guard let parser = parser, let reader = reader else {
            return
        }
        
        // From the target time, compute the frame offset;
        // from the frame offset, compute the packet offset
        guard let frameOffset = parser.frameOffset(forTime: time),
            let packetOffset = parser.packetOffset(forFrame: frameOffset) else {
                return
        }
        // Update the current state
        currentTimeOffset = time
        isFileSchedulingComplete = false
        
        // Record the current state, to restore in a moment
        let isPlaying = playerNode.isPlaying
        let lastVolume = volumeRampTargetValue ?? volume
        
        // To avoid glitches, stop playback first
        playerNode.stop()
        volume = 0
        
        // Update the reader's position in the playback data
        do {
            try reader.seek(packetOffset)
        } catch {
            return
        }
        
        // Restore the state recorded above
        if isPlaying {
            playerNode.play()
        }
        
        // Update the UI
        delegate?.streamer(self, updatedCurrentTime: time)
        
        // Ramp back to the original volume
        swellVolume(to: lastVolume)
    }

Compute the frame offset for a given time:

   public func frameOffset(forTime time: TimeInterval) -> AVAudioFramePosition? {
        guard let _ = dataFormat?.streamDescription.pointee,
            let frameCount = totalFrameCount,
            let duration = duration else {
                return nil
        }
        // Current time / total duration gives the ratio into the file
        let ratio = time / duration
        return AVAudioFramePosition(Double(frameCount) * ratio)
    }
Compute the packet position for a given frame:
    public func packetOffset(forFrame frame: AVAudioFramePosition) -> AVAudioPacketCount? {
        guard let framesPerPacket = dataFormat?.streamDescription.pointee.mFramesPerPacket else {
            return nil
        }
        // Frame index / frames per packet
        return AVAudioPacketCount(frame) / AVAudioPacketCount(framesPerPacket)
    }
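The two conversions compose as time → frame → packet. A standalone pure-Swift sketch, reusing the illustrative numbers from step 5:

```swift
let duration = 154.8               // seconds (assumed total duration)
let totalFrameCount = 6_825_600
let framesPerPacket = 1_152

func frameOffset(forTime time: Double) -> Int {
    // current time / total duration gives the ratio into the file
    let ratio = time / duration
    return Int(Double(totalFrameCount) * ratio)
}

func packetOffset(forFrame frame: Int) -> Int {
    // frame index / frames per packet
    return frame / framesPerPacket
}

let frame = frameOffset(forTime: 60)   // seek to 1:00
let packet = packetOffset(forFrame: frame)
print(frame, packet)                   // roughly 39% of the way into the file
```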
6.2 In the reader, the playback-resource scheduler:
public func seek(_ packet: AVAudioPacketCount) throws {
        queue.sync {
            // Update the position offset
            currentPacket = packet
        }
    }

Here is where the recorded position, currentPacket, takes effect:
in step 3's callback, ReaderConverterCallback,

    // ...
    // In this example, one audio packet corresponds to one audio buffer
    let packet = packets[packetIndex]
    var data = packet.0
    // ...
    _ = data.withUnsafeMutableBytes { (rawMutableBufferPointer) in // ...
   }
   // ...


[Figure: player UI screenshot]

7. UX polish: dragging the playback scrubber

Three touch events to handle: touch down, drag, touch up.

// Touch down: suppress the delegate method that refreshes playback progress
@IBAction func progressSliderTouchedDown(_ sender: UISlider) {
        isSeeking = true
    }

    // Dragging: with the progress delegate suppressed, drive the UI from the gesture
    @IBAction func progressSliderValueChanged(_ sender: UISlider) {
        let currentTime = TimeInterval(progressSlider.value)
        currentTimeLabel.text = currentTime.toMMSS()
    }

// Touch up: re-enable the progress delegate; only now reschedule playback
@IBAction func progressSliderTouchedUp(_ sender: UISlider) {
        seek(sender)
        isSeeking = false
    }

The related delegate method updates the time label and progress slider from the playback progress:

func streamer(_ streamer: Streaming, updatedCurrentTime currentTime: TimeInterval) {
        if !isSeeking {
            progressSlider.value = Float(currentTime)
            currentTimeLabel.text = currentTime.toMMSS()
        }
    }
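toMMSS() is not shown in the article; a minimal sketch of such a TimeInterval extension (the name and behavior are assumptions based on how it is used above) could be:

```swift
import Foundation

extension TimeInterval {
    /// Formats a second count as "M:SS", e.g. 154.8 -> "2:34".
    func toMMSS() -> String {
        let totalSeconds = Int(self)
        return String(format: "%d:%02d", totalSeconds / 60, totalSeconds % 60)
    }
}

print(154.8.toMMSS())  // "2:34"
print(9.0.toMMSS())    // "0:09"
```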

8. Repeat-one mode

In step 4, playback buffers are dispatched from a timer.

All it takes is managing the logic of the timer's two methods
(scheduling audio buffers, and updating state when a pass finishes):

let timer = Timer(timeInterval: interval / 2, repeats: true) {
            [weak self] _ in
            // ...
            self?.scheduleNextBuffer()
            self?.handleTimeUpdate()
            // ...
        }

Scheduling the audio buffers:


func scheduleNextBuffer() {
        guard let reader = reader else {
            return
        }
        // If repeats is on, keep playing regardless of whether a full pass has finished
        guard !isFileSchedulingComplete || repeats else {
            return
        }
       // ... below, the player node consumes the scheduled buffers
}

Handle the state according to the playback progress:

func handleTimeUpdate() {
        guard let currentTime = currentTime, let duration = duration else {
            return
        }
        // Once the current time passes the duration, one pass is done: seek back
        if currentTime >= duration {
            try? seek(to: 0)
            // and pause, unless repeating
            if !repeats {
                pause()
            }
        }
    }
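The branching in handleTimeUpdate can be summed up as a small pure function (a sketch; the names here are hypothetical, and the article's version mutates player state directly instead of returning a value):

```swift
enum EndAction { case keepPlaying, seekToStartAndContinue, seekToStartAndPause }

// Mirrors handleTimeUpdate's decision: past the end, either loop or pause.
func endAction(currentTime: Double, duration: Double, repeats: Bool) -> EndAction {
    guard currentTime >= duration else { return .keepPlaying }
    return repeats ? .seekToStartAndContinue : .seekToStartAndPause
}

print(endAction(currentTime: 150, duration: 154.8, repeats: true))   // keepPlaying
print(endAction(currentTime: 155, duration: 154.8, repeats: true))   // seekToStartAndContinue
print(endAction(currentTime: 155, duration: 154.8, repeats: false))  // seekToStartAndPause
```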

The full code is on GitHub.



