WebRTC transports media over UDP, so packet loss is unavoidable. To combat it, WebRTC mainly relies on FEC (Forward Error Correction) and NACK (Negative Acknowledgement). With NACK, the receiver asks the sender to retransmit a packet only after it detects the loss. However, a lost packet is not always retransmitted: in some scenarios retransmission causes other problems, such as increased latency and a growing receive buffer, and the sender may no longer even have the packet cached, making retransmission impossible. In those cases retransmission is abandoned. Because a keyframe can be decoded on its own, without referencing earlier or later frames, requesting a keyframe is a more practical alternative: the decoder can immediately refresh to a new picture, avoiding both the freeze caused by waiting a long time for many retransmissions and the corruption (artifacts) caused by decoding subsequent data when a retransmitted packet never arrives.
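To make the trade-off concrete, the receiver-side decision can be sketched roughly as follows. This is a minimal sketch with hypothetical names, not WebRTC's actual API; the real logic is spread across the NACK module, jitter buffer and RTCP code shown later in this article.

```cpp
// Hypothetical sketch of the recovery decision described above.
enum class RecoveryAction { kNack, kRequestKeyFrame };

RecoveryAction DecideRecovery(size_t num_missing_packets,
                              int64_t oldest_missing_age_ms,
                              size_t max_nack_packets,
                              int64_t max_packet_age_ms) {
  // Too many missing packets: retransmitting them all would inflate the
  // jitter buffer and the end-to-end delay, so ask for a keyframe instead.
  if (num_missing_packets > max_nack_packets)
    return RecoveryAction::kRequestKeyFrame;
  // Packets that are too old may no longer be in the sender's retransmission
  // history, so a NACK cannot succeed; ask for a keyframe instead.
  if (oldest_missing_age_ms > max_packet_age_ms)
    return RecoveryAction::kRequestKeyFrame;
  // Otherwise the loss is small and recent: request retransmission via NACK.
  return RecoveryAction::kNack;
}
```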
Keyframe Request Scenarios
There are many situations in WebRTC that require a keyframe request, for example:
1) When decoding H264, the SPS/PPS cannot be obtained, so decoding is impossible and a keyframe must be requested. In the modules/video_coding module directory, under the video_coding project, the relevant code is as follows:
```cpp
H264SpsPpsTracker::PacketAction H264SpsPpsTracker::CopyAndFixBitstream(
    VCMPacket* packet) {
  RTC_DCHECK(packet->codec() == kVideoCodecH264);

  const uint8_t* data = packet->dataPtr;
  const size_t data_size = packet->sizeBytes;
  const RTPVideoHeader& video_header = packet->video_header;
  auto& h264_header =
      absl::get<RTPVideoHeaderH264>(packet->video_header.video_type_header);

  bool append_sps_pps = false;
  auto sps = sps_data_.end();
  auto pps = pps_data_.end();

  for (size_t i = 0; i < h264_header.nalus_length; ++i) {
    const NaluInfo& nalu = h264_header.nalus[i];
    switch (nalu.type) {
      case H264::NaluType::kSps: {
        sps_data_[nalu.sps_id].width = packet->width();
        sps_data_[nalu.sps_id].height = packet->height();
        break;
      }
      case H264::NaluType::kPps: {
        pps_data_[nalu.pps_id].sps_id = nalu.sps_id;
        break;
      }
      case H264::NaluType::kIdr: {
        // If this is the first packet of an IDR, make sure we have the
        // required SPS/PPS and also calculate how much extra space we need
        // in the buffer to prepend the SPS/PPS to the bitstream with start
        // codes.
        if (video_header.is_first_packet_in_frame) {
          if (nalu.pps_id == -1) {
            RTC_LOG(LS_WARNING) << "No PPS id in IDR nalu.";
            return kRequestKeyframe;
          }

          pps = pps_data_.find(nalu.pps_id);
          if (pps == pps_data_.end()) {
            RTC_LOG(LS_WARNING)
                << "No PPS with id << " << nalu.pps_id << " received";
            return kRequestKeyframe;
          }

          sps = sps_data_.find(pps->second.sps_id);
          if (sps == sps_data_.end()) {
            RTC_LOG(LS_WARNING)
                << "No SPS with id << " << pps->second.sps_id << " received";
            return kRequestKeyframe;
          }
          // ...
}
```
2) Too many packets are lost. Retransmitting every one of them would increase latency (a frame is only decoded and rendered once its data is complete), so newly arriving data would just keep piling up and the jitter buffer would keep growing. At that point it is more practical to simply request a keyframe and give up on the earlier lost packets. Since a keyframe can be decoded independently, this does not cause decoder artifacts or mosaic effects. However, because the preceding frames were discarded, the newly generated keyframe is not continuous with the previously rendered video, so when the picture changes a lot there can be a slight stutter, effectively a frame jump. In the modules/video_coding module directory, under the nack_module project, the relevant code is as follows:
```cpp
void NackModule::AddPacketsToNack(uint16_t seq_num_start,
                                  uint16_t seq_num_end) {
  // Remove old packets.
  auto it = nack_list_.lower_bound(seq_num_end - kMaxPacketAge);
  nack_list_.erase(nack_list_.begin(), it);

  // If the nack list is too large, remove packets from the nack list until
  // the latest first packet of a keyframe. If the list is still too large,
  // clear it and request a keyframe.
  uint16_t num_new_nacks = ForwardDiff(seq_num_start, seq_num_end);
  if (nack_list_.size() + num_new_nacks > kMaxNackPackets) {
    while (RemovePacketsUntilKeyFrame() &&
           nack_list_.size() + num_new_nacks > kMaxNackPackets) {
    }

    if (nack_list_.size() + num_new_nacks > kMaxNackPackets) {
      nack_list_.clear();
      RTC_LOG(LS_WARNING) << "NACK list full, clearing NACK"
                             " list and requesting keyframe.";
      keyframe_request_sender_->RequestKeyFrame();
      return;
    }
  }

  for (uint16_t seq_num = seq_num_start; seq_num != seq_num_end; ++seq_num) {
    // Do not send nack for packets that are already recovered by FEC or RTX
    if (recovered_list_.find(seq_num) != recovered_list_.end())
      continue;
    NackInfo nack_info(seq_num, seq_num + WaitNumberOfPackets(0.5),
                       clock_->TimeInMilliseconds());
    RTC_DCHECK(nack_list_.find(seq_num) == nack_list_.end());
    nack_list_[seq_num] = nack_info;
  }
}
```
In the code above, if the number of packets awaiting retransmission (nack_list_.size()) still exceeds the allowed size after RemovePacketsUntilKeyFrame() has been applied, the retransmission list is cleared with nack_list_.clear() and a keyframe is requested instead.
3) The lost packets are too old, in which case the sender may no longer have them cached. In the modules/video_coding module directory, under the video_coding project, the relevant code is as follows:
```cpp
bool VCMJitterBuffer::UpdateNackList(uint16_t sequence_number) {
  if (nack_mode_ == kNoNack) {
    return true;
  }
  // Make sure we don't add packets which are already too old to be decoded.
  if (!last_decoded_state_.in_initial_state()) {
    latest_received_sequence_number_ = LatestSequenceNumber(
        latest_received_sequence_number_, last_decoded_state_.sequence_num());
  }
  if (IsNewerSequenceNumber(sequence_number,
                            latest_received_sequence_number_)) {
    // Push any missing sequence numbers to the NACK list.
    for (uint16_t i = latest_received_sequence_number_ + 1;
         IsNewerSequenceNumber(sequence_number, i); ++i) {
      missing_sequence_numbers_.insert(missing_sequence_numbers_.end(), i);
    }
    if (TooLargeNackList() && !HandleTooLargeNackList()) {
      RTC_LOG(LS_WARNING)
          << "Requesting key frame due to too large NACK list.";
      return false;
    }
    if (MissingTooOldPacket(sequence_number) &&
        !HandleTooOldPackets(sequence_number)) {
      RTC_LOG(LS_WARNING)
          << "Requesting key frame due to missing too old packets";
      return false;
    }
  } else {
    missing_sequence_numbers_.erase(sequence_number);
  }
  return true;
}
```
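Both listings above lean on wraparound-aware sequence-number arithmetic, since RTP sequence numbers are 16-bit and wrap from 65535 back to 0. The following is a simplified sketch of what helpers such as ForwardDiff and IsNewerSequenceNumber compute; the real WebRTC implementations are templated and handle extra corner cases, so treat this only as an illustration.

```cpp
#include <cstdint>

// Number of steps needed to move forward from `begin` to reach `end`,
// modulo 2^16. Unsigned wraparound does the work.
uint16_t ForwardDiff(uint16_t begin, uint16_t end) {
  return static_cast<uint16_t>(end - begin);
}

// True if `sequence_number` is ahead of `prev`, treating the 16-bit space as
// a circle: e.g. 2 is "newer" than 65534 after a wraparound.
bool IsNewerSequenceNumber(uint16_t sequence_number, uint16_t prev) {
  return sequence_number != prev &&
         static_cast<uint16_t>(sequence_number - prev) < 0x8000;
}
```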
The cases above are several situations that call for a keyframe request. All that remains is to agree on an RTCP message format so the encoding/sending side can be told to produce a keyframe. The keyframe request RTCP messages are fairly simple: RFC 4585 (RTP/AVPF) and RFC 5104 (Codec Control Messages in AVPF) define two different formats, Picture Loss Indication (PLI) and Full Intra Request (FIR). WebRTC uses exactly these two messages for keyframe requests. In the modules/rtp_rtcp module directory, in the rtp_rtcp project, the relevant code is as follows:
```cpp
int32_t ModuleRtpRtcpImpl::RequestKeyFrame() {
  switch (key_frame_req_method_) {
    case kKeyFrameReqPliRtcp:
      return SendRTCP(kRtcpPli);
    case kKeyFrameReqFirRtcp:
      return SendRTCP(kRtcpFir);
  }
  return -1;
}
```
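Which of the two messages RequestKeyFrame() sends is configured on the module beforehand. A hedged usage sketch, assuming the RtpRtcp module interface of the same WebRTC version (which exposes SetKeyFrameRequestMethod); the actual call site and defaults may differ:

```cpp
#include "modules/rtp_rtcp/include/rtp_rtcp.h"
#include "modules/rtp_rtcp/include/rtp_rtcp_defines.h"

// Selects which RTCP message RequestKeyFrame() will emit, then requests one.
void ConfigureAndRequestKeyFrame(webrtc::RtpRtcp* rtp_rtcp, bool use_fir) {
  rtp_rtcp->SetKeyFrameRequestMethod(use_fir ? webrtc::kKeyFrameReqFirRtcp
                                             : webrtc::kKeyFrameReqPliRtcp);
  // Later, when the decoding side decides it needs a refresh:
  rtp_rtcp->RequestKeyFrame();  // sends an RTCP PLI or FIR as configured
}
```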
Picture Loss Indication (PLI)
Defined in RFC 4585, PLI is one of the RTCP feedback messages. The common RTCP feedback packet format is specified as follows:
```
// RFC 4585: Feedback format.
// Common packet format:
//
//    0                   1                   2                   3
//    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//   |V=2|P|   FMT   |       PT      |            length             |
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//   |                  SSRC of packet sender                        |
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//   |                  SSRC of media source                         |
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//   :            Feedback Control Information (FCI)                 :
//   :                                                               :
```
The PT field is specified as follows:
```
Name   | Value | Brief Description
-------+-------+------------------------------------
RTPFB  |  205  | Transport layer FB message
PSFB   |  206  | Payload-specific FB message
```
Since a PLI only needs to ask for a keyframe and carries no other information, its FCI part is empty. For PLI, FMT is defined as 1 and PT as PSFB.
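Putting the fields together, a PLI is just the 8-byte feedback header (FMT = 1, PT = 206) followed by the two SSRCs. A minimal sketch of serializing one by hand, independent of WebRTC's packet classes, might look like this:

```cpp
#include <cstdint>
#include <vector>

// Hand-rolled PLI packet per RFC 4585: 12 bytes total, empty FCI.
std::vector<uint8_t> BuildPli(uint32_t sender_ssrc, uint32_t media_ssrc) {
  std::vector<uint8_t> p;
  p.push_back(0x80 | 1);  // V=2, P=0, FMT=1 (PLI)
  p.push_back(206);       // PT = PSFB (payload-specific feedback)
  p.push_back(0);         // length, high byte
  p.push_back(2);         // length = 2 -> (2 + 1) * 4 = 12 bytes total
  for (int shift = 24; shift >= 0; shift -= 8)
    p.push_back((sender_ssrc >> shift) & 0xff);  // SSRC of packet sender
  for (int shift = 24; shift >= 0; shift -= 8)
    p.push_back((media_ssrc >> shift) & 0xff);   // SSRC of media source
  return p;  // no FCI for PLI
}
```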
In the WebRTC source, the PLI parsing and serialization code lives in the modules/rtp_rtcp module directory, under the rtp_rtcp_format project. The relevant code is as follows:
```cpp
// Picture loss indication (PLI) (RFC 4585).
// FCI: no feedback control information.
bool Pli::Parse(const CommonHeader& packet) {
  RTC_DCHECK_EQ(packet.type(), kPacketType);
  RTC_DCHECK_EQ(packet.fmt(), kFeedbackMessageType);

  if (packet.payload_size_bytes() < kCommonFeedbackLength) {
    RTC_LOG(LS_WARNING) << "Packet is too small to be a valid PLI packet";
    return false;
  }

  ParseCommonFeedback(packet.payload());
  return true;
}

size_t Pli::BlockLength() const {
  return kHeaderLength + kCommonFeedbackLength;
}

bool Pli::Create(uint8_t* packet,
                 size_t* index,
                 size_t max_length,
                 PacketReadyCallback callback) const {
  while (*index + BlockLength() > max_length) {
    if (!OnBufferFull(packet, index, callback))
      return false;
  }

  CreateHeader(kFeedbackMessageType, kPacketType, HeaderLength(), packet,
               index);
  CreateCommonFeedback(packet + *index);
  *index += kCommonFeedbackLength;
  return true;
}
```
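For reference, the Pli class can presumably be used roughly as below. This is a hedged sketch based on the rtcp::RtcpPacket interface in the same module; the exact header path and call sites may differ between WebRTC versions.

```cpp
#include "modules/rtp_rtcp/source/rtcp_packet/pli.h"

// Builds a serialized PLI, ready to hand to the transport.
rtc::Buffer BuildPliPacket(uint32_t local_ssrc, uint32_t remote_media_ssrc) {
  webrtc::rtcp::Pli pli;
  pli.SetSenderSsrc(local_ssrc);        // SSRC of packet sender
  pli.SetMediaSsrc(remote_media_ssrc);  // SSRC of the stream to refresh
  return pli.Build();
}
```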
A PLI message lets the decoding side tell the encoding side that the encoded data for the picture it needs to decode has been lost. For codecs based on inter-frame prediction, receiving a PLI tells the encoder that video data went missing. Inter-predicted frames can only be decoded if their reference frames are intact (in H264, for example, B-frames reference both earlier and later frames), so once earlier data is lost the following frames cannot be decoded into correct pictures. The encoder can then simply generate a keyframe and send it to the decoder.
Full Intra Request (FIR)
Defined in RFC 5104. Referring to the RTCP feedback packet format of the previous section, FIR uses FMT = 4 and PT = PSFB. Because a FIR can address multiple encoding senders (for example in a multipoint video conference), it does use the FCI part, which carries the SSRC of each targeted sender. The packet format is:
```
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |V=2|P|   FMT   |       PT      |            length             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                  SSRC of packet sender                        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             SSRC of media source (unused) = 0                 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   :            Feedback Control Information (FCI)                 :
   :                                                               :

// Full intra request (FIR) (RFC 5104).
// The Feedback Control Information (FCI) for the Full Intra Request
// consists of one or more FCI entries.
// FCI:
//    0                   1                   2                   3
//    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//   |                              SSRC                             |
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//   |    Seq nr.    |                 Reserved = 0                  |
//   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
In the WebRTC source, the FIR parsing and serialization code also lives in the modules/rtp_rtcp module directory, under the rtp_rtcp_format project. That code is not reproduced here; it is handled much like PLI, except that the FCI part has to be filled in.
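Usage is likewise similar to PLI; the main difference is that each FIR carries one or more FCI entries, each with a target SSRC and a command sequence number so that retransmitted FIRs are not mistaken for new requests. A hedged sketch, assuming the rtcp::Fir class in the same directory exposes AddRequestTo(ssrc, seq_num) in this WebRTC version:

```cpp
#include "modules/rtp_rtcp/source/rtcp_packet/fir.h"

// Builds a serialized FIR addressed to one encoder.
rtc::Buffer BuildFirPacket(uint32_t local_ssrc,
                           uint32_t target_media_ssrc,
                           uint8_t command_seq_num) {
  webrtc::rtcp::Fir fir;
  fir.SetSenderSsrc(local_ssrc);
  // One FCI entry per encoder that should produce a keyframe; the sequence
  // number lets the FIR receiver ignore duplicated or retransmitted requests.
  fir.AddRequestTo(target_media_ssrc, command_seq_num);
  return fir.Build();
}
```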
When the decoding side needs a refresh, it can send a FIR message to the encoding side, which then sends a keyframe to refresh the decoder. This resembles the PLI message, but while PLI signals packet loss, FIR does not: it is also needed in some scenarios without any packet loss. Two examples:
1) When the decoding side switches to a different video stream, it needs new decoding parameters, so it can send a FIR to ask the encoder for a keyframe, obtain the new parameters, and refresh the video decoder;
2) In a video conference, new users join at arbitrary times, and the frames the various encoders happen to be sending are usually not keyframes, so a newly joined user cannot necessarily start decoding. The new user therefore sends a FIR asking each encoder to send it a keyframe, after which it can decode normally.
Summary
This article covered several keyframe request scenarios and the two keyframe request messages defined in AVPF. Although both messages lead to the same result, they express different intents and are meant for different scenarios, so they should be distinguished when used.