
How live555 obtains the SPS and PPS

This question bothered me for quite a while, mainly because extracting these two pieces of data involves so many classes. We first need to identify which classes are involved, then untangle how they relate to one another; let's work through it step by step.
The SPS and PPS are obtained after the server receives the DESCRIBE command. Before reading on, please see the previous article: http://www.shouyanwang.org/thread-704-1-1.html
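(For context: the reason the DESCRIBE handler needs the SPS and PPS at all is that the SDP it returns advertises them, base64-encoded, in an fmtp line. The following values are purely illustrative:)

m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=Z0LAHtkDxWhAAAADAEAAAAwDxYuS,aMuMsg==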


In OnDemandServerMediaSubsession::sdpLines() (inherited by our H264VideoFileServerMediaSubsession), the server creates a dummy source and sink:

FramedSource* inputSource = createNewStreamSource(0, estBitrate);
RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);


These two calls essentially determine the class hierarchies involved in parsing an H.264 stream:

A first look at the class hierarchies of FramedSource and RTPSink suggests that FramedSource is concerned with pulling H.264 frames out of the file one at a time, while RTPSink handles splitting them into RTP/RTCP packets and sending them to the client.
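For reference, the two factory methods are overridden in H264VideoFileServerMediaSubsession. Paraphrased from the live555 sources (the bitrate value is just the library's hard-coded estimate), they look roughly like this:

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the raw byte-stream source from the file:
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;

  // Wrap it in a framer that discovers NAL unit boundaries:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}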

This gives rise to two main inheritance chains:

FramedSource* inputSource = createNewStreamSource(0, estBitrate);
gives rise to the chain:
H264VideoStreamFramer -- MPEGVideoStreamFramer -- FramedFilter -- FramedSource -- MediaSource -- Medium
So whenever inputSource appears below, it is really an H264VideoStreamFramer.

RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);
gives rise to the chain:
H264VideoRTPSink -- VideoRTPSink -- MultiFramedRTPSink -- RTPSink -- MediaSink
So whenever the RTPSink appears below, it is really an H264VideoRTPSink.

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)
  printf("H264VideoFileServerMediaSubsession getAuxSDPLine\r\n");
  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file.  This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink; // does the "dummy" prefix just mean it is a borrowed reference?
    // Start reading the file; the whole point of this is to obtain the SPS and PPS
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this); // startPlaying() is declared in MediaSink; for H264 this inputSource is never NULL
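    // (The original excerpt stops here. Paraphrasing the live555 sources, the
    // function continues roughly as follows:)
    checkForAuxSDPLine(this); // polls rtpSink->auxSDPLine(); reschedules itself until it is ready, then sets fDoneFlag
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag); // run the event loop until fDoneFlag is set

  return fAuxSDPLine;
}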
fDummyRTPSink->startPlaying() actually invokes MediaSink::startPlaying():
Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  printf("MediaSink startPlaying....\r\n");
  // Make sure we're not already being played:
  if (fSource != NULL) { // note: this is fSource (the member), not the incoming source
    printf("MediaSink is already being played\r\n");
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }
  fSource = (FramedSource*)&source;

  fAfterFunc = afterFunc; // function pointer defined in MediaSink; here it points to afterPlayingDummy in H264VideoFileServerMediaSubsession
  fAfterClientData = afterClientData; // actually points to the H264VideoFileServerMediaSubsession
  return continuePlaying(); // virtual dispatch: runs H264VideoRTPSink's continuePlaying()
}
Note here that fSource now points at the source that was passed in, and that source is the H264VideoStreamFramer from the top of this article.

continuePlaying() resolves to H264VideoRTPSink::continuePlaying():
Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    printf("H264VideoRTPSink init H264FUAFragmenter\r\n");

Pay close attention here, because a new class is introduced: H264FUAFragmenter. Its inheritance chain is:

H264FUAFragmenter -- FramedFilter -- FramedSource -- MediaSource

fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize, // 100KB
                                       ourMaxPacketSize() - 12/*RTP hdr size*/);

fSource = fOurFragmenter; // fOurFragmenter implements the RTP fragmentation (FU-A packetization)


Pay particular attention to these two lines: fSource is first used to construct fOurFragmenter, and is then repointed at the newly created fOurFragmenter.

During fOurFragmenter's construction, its member FramedSource* fInputSource is set to the source that was passed in, i.e. the H264VideoStreamFramer.

I stress this here because it matters later.
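A sketch of that constructor, paraphrased from the live555 sources; note that it is FramedFilter's constructor that stores the source into fInputSource, and that fNumValidDataBytes starts at 1, which explains a check we will meet shortly in doGetNextFrame():

H264FUAFragmenter::H264FUAFragmenter(UsageEnvironment& env, FramedSource* inputSource,
                                     unsigned inputBufferMax, unsigned maxOutputPacketSize)
  : FramedFilter(env, inputSource), // FramedFilter stashes inputSource in fInputSource
    fInputBufferSize(inputBufferMax + 1), fMaxOutputPacketSize(maxOutputPacketSize),
    fNumValidDataBytes(1), fCurDataOffset(1), fSaveNumTruncatedBytes(0) {
  // Byte 0 of fInputBuffer is reserved for the extra FU-A header byte:
  fInputBuffer = new unsigned char[fInputBufferSize];
}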


The core of continuePlaying() is buildAndSendPacket(), which implements the RTP packetization described in RFC 3984; its own core call is:
packFrame();
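For orientation, MultiFramedRTPSink::buildAndSendPacket() writes the fixed RTP header before handing over to packFrame(). Roughly, paraphrased from the live555 sources:

void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  fIsFirstPacket = isFirstPacket;

  // Set up the fixed 12-byte RTP header:
  unsigned rtpHdr = 0x80000000;       // RTP version 2
  rtpHdr |= (fRTPPayloadType << 16);  // payload type (e.g. 96 for dynamic H.264)
  rtpHdr |= fSeqNo;                   // sequence number
  fOutBuf->enqueueWord(rtpHdr);

  // Leave a 4-byte hole for the timestamp (filled in once a frame is packed):
  fTimestampPosition = fOutBuf->curPacketSize();
  fOutBuf->skipBytes(4);

  fOutBuf->enqueueWord(SSRC());

  // ... (room for a payload-format-specific special header) ...

  packFrame();
}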


void MultiFramedRTPSink::packFrame() {
  // Get the next frame.
  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    printf("MultiFramedRTPSink packFrame Over flow data---\r\n");
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize(); // where does fOutBuf get initialized, by the way?
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    printf("MultiFrameRTPSink packFrame read a new frame from the source--\r\n");
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;

    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;
//    printf("MultiFrameRTPSink packFrame curptr:%d,totalBytesAvailable:%d--\r\n",fOutBuf->curPtr(), fOutBuf->totalBytesAvailable());
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this); // everything seems to converge here
  }
}
At this point fSource->getNextFrame() actually runs FramedSource::getNextFrame(), because H264FUAFragmenter does not override that method:
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  printf("FrameSource getNextFrame ...\r\n");
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  // Virtual dispatch: this time it lands in H264FUAFragmenter::doGetNextFrame()
  doGetNextFrame();
}
The code above initializes FramedSource's members, chiefly the function pointers and the void* client-data pointers: the void* arguments point at the H264VideoRTPSink object itself, and the function pointers at its handlers (the afterGettingFrame and ourHandleClosure passed in from MultiFramedRTPSink::packFrame()).
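How do those pointers get used? When the source later has a frame ready, it calls the static FramedSource::afterGetting(), which fires the stored callback. A paraphrase from the live555 sources:

void FramedSource::afterGetting(FramedSource* source) {
  // Allow the source to be read again before invoking the callback
  // (the callback frequently asks for the next frame right away):
  source->fIsCurrentlyAwaitingData = False;

  if (source->fAfterGettingFunc != NULL) {
    (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
                                   source->fFrameSize, source->fNumTruncatedBytes,
                                   source->fPresentationTime,
                                   source->fDurationInMicroseconds);
  }
}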

H264FUAFragmenter's doGetNextFrame():
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // normally this is the branch taken (the buffer holds no NAL unit yet)
    printf("H264FUAFragmenter doGetNextFrame validDataBytes..\r\n");
    // We have no NAL unit data currently in the buffer.  Read a new one:
    // fInputSource here is actually the H264VideoStreamFramer
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // otherwise: fragment the buffered NAL unit, or send it whole
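    // (The elided body here is the actual RFC 3984 packetization. Paraphrasing
    // the comments in the live555 sources, it distinguishes three cases:
    //  1. a new NAL unit small enough for one RTP packet: deliver it as-is;
    //  2. a new NAL unit too big for one packet: deliver its first fragment
    //     as an FU-A packet, replacing the NAL header with an FU indicator +
    //     FU header with the Start bit set;
    //  3. a NAL unit we have already partly delivered: deliver the next FU-A
    //     fragment, setting the End bit on the final one.)
  }
}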
Notice the key call in the code above: fInputSource->getNextFrame(). fInputSource is of type H264VideoStreamFramer, but H264VideoStreamFramer does not override this method either, so once again execution lands in FramedSource::getNextFrame(), quoted above.

It is the very same function, so I won't repeat it in full; the only thing that changes is the object it runs on. My earlier inline comment claimed that doGetNextFrame() dispatches to H264FUAFragmenter::doGetNextFrame(), but that was a misreading here: this time the FramedSource is not the H264FUAFragmenter object. So whose doGetNextFrame() actually runs?
From H264VideoStreamFramer's inheritance chain we can deduce the answer: doGetNextFrame() here runs the implementation in its parent class, MPEGVideoStreamFramer:
void MPEGVideoStreamFramer::doGetNextFrame() {
  printf("MPEGVideoStreamFrame doGetNextFrame ....\r\n");
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}
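To preview where this is heading: continueReadProcessing() drives fParser (an H264VideoStreamParser), which scans the byte stream for NAL units; when it encounters the SPS and PPS NAL units, the framer saves copies of them, and those saved copies are what H264VideoRTPSink::auxSDPLine() later base64-encodes into the sprop-parameter-sets SDP attribute. A rough paraphrase of the live555 sources:

void MPEGVideoStreamFramer::continueReadProcessing() {
  unsigned acquiredFrameSize = fParser->parse(); // may save SPS/PPS as a side effect
  if (acquiredFrameSize > 0) {
    // We were able to acquire a frame from the input;
    // it has already been copied to the reader's space:
    fFrameSize = acquiredFrameSize;
    fNumTruncatedBytes = fParser->numTruncatedBytes();
    // ... compute fPresentationTime / fDurationInMicroseconds ...

    // Hand the frame to whoever called getNextFrame() on us:
    afterGetting(this);
  }
  // else: the parser needs more input, or the stream ended;
  // parsing resumes once more file data arrives.
}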

I'll continue the analysis tomorrow; even this much took nearly an hour and a half... tired now, time for a break.