
Dissecting the Ox: Live555 Source Code Demystified (RTP Packetization)

2016-09-28

This article walks through live555's server-side RTP packetization flow, using the MediaServer as the working example. Before reading it, you may want to read the article linked below first:

Dissecting the Ox: Live555 Source Code Demystified (the RTSP session setup process, explained via MediaServer)

    http://blog.csdn.net/smilestone_322/article/details/18923139

After the server receives the client's PLAY command, it calls startStream to start the stream:

void OnDemandServerMediaSubsession::startStream(unsigned clientSessionId,
                void* streamToken,
                TaskFunc* rtcpRRHandler,
                void* rtcpRRHandlerClientData,
                unsigned short& rtpSeqNum,
                unsigned& rtpTimestamp,
                ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
                void* serverRequestAlternativeByteHandlerClientData) {
  StreamState* streamState = (StreamState*)streamToken;
  Destinations* destinations
    = (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
  if (streamState != NULL) {
    // Start the stream:
    streamState->startPlaying(destinations,
                rtcpRRHandler, rtcpRRHandlerClientData,
                serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
    RTPSink* rtpSink = streamState->rtpSink(); // alias
    if (rtpSink != NULL) {
      // Retrieve the current sequence number and timestamp:
      rtpSeqNum = rtpSink->currentSeqNo();
      rtpTimestamp = rtpSink->presetNextTimestamp();
    }
  }
}
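The rtpSeqNum and rtpTimestamp that startStream hands back are what the RTSP layer reports to the client in the RTP-Info header of its PLAY response, letting the client line up incoming RTP packets with the requested range. A typical exchange (all values here are illustrative) looks like this:

PLAY rtsp://192.168.1.10/test.264/ RTSP/1.0
CSeq: 4
Session: 66334873
Range: npt=0.000-

RTSP/1.0 200 OK
CSeq: 4
Session: 66334873
Range: npt=0.000-
RTP-Info: url=rtsp://192.168.1.10/test.264/track1;seq=3456;rtptime=1234567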

     

Next, trace into StreamState::startPlaying; its source is as follows:

void StreamState
::startPlaying(Destinations* dests,
            TaskFunc* rtcpRRHandler, void* rtcpRRHandlerClientData,
            ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
            void* serverRequestAlternativeByteHandlerClientData) {
  if (dests == NULL) return;

  if (fRTCPInstance == NULL && fRTPSink != NULL) {
    // Create (and start) a 'RTCP instance' for this RTP sink:
    // (used to send RTCP packets)
    fRTCPInstance
      = RTCPInstance::createNew(fRTPSink->envir(), fRTCPgs,
                   fTotalBW, (unsigned char*)fMaster.fCNAME,
                   fRTPSink, NULL /* we're a server */);
        // Note: This starts RTCP running automatically
  }

  if (dests->isTCP) {
    // Change RTP and RTCP to use the TCP socket instead of UDP:
    // Which transport is used is decided by the client, which tells the
    // server its choice in the SETUP request.
    if (fRTPSink != NULL) {
      fRTPSink->addStreamSocket(dests->tcpSocketNum, dests->rtpChannelId);
      RTPInterface
        ::setServerRequestAlternativeByteHandler(fRTPSink->envir(), dests->tcpSocketNum,
                             serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
        // So that we continue to handle RTSP commands from the client
    }
    if (fRTCPInstance != NULL) {
      fRTCPInstance->addStreamSocket(dests->tcpSocketNum, dests->rtcpChannelId);
      fRTCPInstance->setSpecificRRHandler(dests->tcpSocketNum, dests->rtcpChannelId,
                         rtcpRRHandler, rtcpRRHandlerClientData);
    }
  } else {
    // Tell the RTP and RTCP 'groupsocks' about this destination
    // (in case they don't already have it):
    if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort);
    if (fRTCPgs != NULL) fRTCPgs->addDestination(dests->addr, dests->rtcpPort);
    if (fRTCPInstance != NULL) {
      fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
                         rtcpRRHandler, rtcpRRHandlerClientData);
    }
  }

  if (fRTCPInstance != NULL) {
    // Hack: Send an initial RTCP "SR" packet, before the initial RTP packet, so that receivers will (likely) be able to
    // get RTCP-synchronized presentation times immediately:
    fRTCPInstance->sendReport();
  }

  if (!fAreCurrentlyPlaying && fMediaSource != NULL) {
    if (fRTPSink != NULL) {
      // Start the stream:
      fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    } else if (fUDPSink != NULL) {
      fUDPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    }
  }
}

     

Next we focus on this call:

fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);

fRTPSink is declared as RTPSink*, and RTPSink inherits from MediaSink, so the call resolves to MediaSink::startPlaying. Stepping into it:

Boolean MediaSink::startPlaying(MediaSource& source,
                   afterPlayingFunc* afterFunc,
                   void* afterClientData) {
  // Make sure we're not already being played:
  if (fSource != NULL) {
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }

  // Save some variables:
  fSource = (FramedSource*)&source;
  fAfterFunc = afterFunc;
  fAfterClientData = afterClientData;
  return continuePlaying();
}

     

This function is much the same for client and server: the sink asks the source for data. On the server side the source reads from a file or a live stream and hands the data to the sink, which packetizes and sends it; on the client side the source receives packets from the network and reassembles them into frames, while the sink handles decoding and playback. A concrete example of this wiring is sketched below.
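To make the pull model concrete, here is a minimal server-side sketch in the spirit of live555's testH264VideoStreamer demo. The file name, multicast address, port, and payload type are illustrative, and the RTCP instance a real server would also create is omitted:

// Minimal sketch, modeled on live555's testH264VideoStreamer demo.
// Error handling and the RTCPInstance are omitted; "test.264",
// the address/port, and payload type 96 are illustrative.
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"
#include "GroupsockHelper.hh"

UsageEnvironment* env;
RTPSink* videoSink;

void afterPlaying(void* /*clientData*/) {
  *env << "...done streaming\n";
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Destination groupsock for RTP:
  struct in_addr destAddr;
  destAddr.s_addr = our_inet_addr("239.255.42.42");
  Groupsock rtpGroupsock(*env, destAddr, Port(18888), 255/*TTL*/);

  // The sink: packs frames into RTP packets (dynamic payload type 96):
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

  // The source chain: file -> framer (NAL unit parser):
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(*env, "test.264");
  FramedSource* videoSource
    = H264VideoStreamFramer::createNew(*env, fileSource);

  // From here on the sink drives everything by pulling frames from the source:
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}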

Stepping into continuePlaying(): it is declared in MediaSink as a pure virtual function (virtual Boolean continuePlaying() = 0;), so the implementation lives in the subclasses. Following the code, the subclass that implements it here is MultiFramedRTPSink:

     

Boolean MultiFramedRTPSink::continuePlaying() {
  // Send the first packet.
  // (This will also schedule any future sends.)
  buildAndSendPacket(True);
  return True;
}

     

continuePlaying() is found in MultiFramedRTPSink, and it is trivial: it just calls buildAndSendPacket(True). MultiFramedRTPSink is a frame-oriented class that obtains one frame at a time from the source; buildAndSendPacket, as the name says, builds a packet and sends it.

     

void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header:
  unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
  rtpHdr |= (fRTPPayloadType<<16); // payload type
  rtpHdr |= fSeqNo; // sequence number
  // Append rtpHdr to the packet buffer:
  fOutBuf->enqueueWord(rtpHdr);

  // Note where the RTP timestamp will go.
  // (We can't fill this in until we start packing payload frames.)
  fTimestampPosition = fOutBuf->curPacketSize();
  // Leave a hole in the buffer for the timestamp; it is filled in later:
  fOutBuf->skipBytes(4);

  // Write the SSRC into the buffer:
  fOutBuf->enqueueWord(SSRC());

  // Allow for a special, payload-format-specific header following the
  // RTP header:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  // Begin packing as many (complete) frames into the packet as we can:
  fTotalFrameSpecificHeaderSizes = 0;
  fNoFramesLeft = False;
  fNumFramesUsedSoFar = 0;
  // Everything above filled in the RTP header; packFrame() packs the payload:
  packFrame();
}
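To see exactly which bits that first header word carries, here is a small standalone sketch of the RTP fixed-header layout from RFC 3550, mirroring the three OR operations above (the payload type and sequence number values are illustrative):

// Sketch: the first 32-bit word of the RTP fixed header (RFC 3550),
// as assembled by buildAndSendPacket(). Values are illustrative.
#include <cstdint>
#include <cstdio>

int main() {
  uint32_t rtpHdr = 0x80000000;      // V=2 (bits 31-30); P, X, CC, M all 0
  uint8_t  payloadType = 96;         // e.g. dynamic H.264 payload type
  uint16_t seqNo = 3456;             // current sequence number

  rtpHdr |= (uint32_t)payloadType << 16; // PT occupies bits 22-16
  rtpHdr |= seqNo;                       // sequence number: bits 15-0
  // The M (marker) bit is bit 23; live555 sets it later via setMarkerBit()
  // when the packet completes a frame.

  printf("first RTP header word: 0x%08X\n", rtpHdr); // prints 0x80600D80
  return 0;
}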

     

The packFrame source is as follows:

void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    // The previous frame was too large and overflowed.
    // Use this frame before reading a new one from the source:
    unsigned frameSize = fOutBuf->overflowDataSize();
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;
    // Update the buffer position bookkeeping:
    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

    // Ask the source for data again. fOutBuf->curPtr() is where the data
    // should be written; the second argument is the space available in the
    // buffer; afterGettingFrame is the callback invoked once a frame has
    // been received; ourHandleClosure is called when the source closes
    // (e.g. at end of file):
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                afterGettingFrame, this, ourHandleClosure, this);
  }
}

getNextFrame is where the source reads one frame from a file or a device (an IP camera, say); when the frame is ready it is handed back to the sink and afterGettingFrame is invoked.

     

Next, the getNextFrame source:

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                   afterGettingFunc* afterGettingFunc,
                   void* afterGettingClientData,
                   onCloseFunc* onCloseFunc,
                   void* onCloseClientData) {
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  // Save some variables:
  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  doGetNextFrame();
}

     

getNextFrame then calls doGetNextFrame() to actually fetch the next frame; doGetNextFrame() is virtual, so each concrete source class supplies its own implementation, as in the sketch below.
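For reference, this is the same contract you implement when feeding live555 from your own capture hardware. Here is a minimal sketch modeled on the DeviceSource template that ships with live555; readFrameFromDevice() is a hypothetical helper standing in for your driver code:

// Sketch of a custom FramedSource, after live555's DeviceSource pattern.
// readFrameFromDevice() is a hypothetical helper for your capture hardware.
#include "FramedSource.hh"
#include <sys/time.h>

class MyDeviceSource: public FramedSource {
public:
  static MyDeviceSource* createNew(UsageEnvironment& env) {
    return new MyDeviceSource(env);
  }

protected:
  MyDeviceSource(UsageEnvironment& env): FramedSource(env) {}

private:
  virtual void doGetNextFrame() {
    // Copy at most fMaxSize bytes into fTo (the sink's buffer); anything
    // beyond fMaxSize would have to be reported via fNumTruncatedBytes:
    fFrameSize = readFrameFromDevice(fTo, fMaxSize); // hypothetical
    fNumTruncatedBytes = 0;
    gettimeofday(&fPresentationTime, NULL);

    // Tell the downstream sink that data is ready. If the device has no
    // data available yet, you must instead return and deliver later
    // (DeviceSource uses an event trigger for that case):
    FramedSource::afterGetting(this);
  }

  unsigned readFrameFromDevice(unsigned char* to, unsigned maxSize);
};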

The H264FUAFragmenter class is used inside H264VideoRTPSink, as one of its member variables. H264VideoRTPSink inherits from VideoRTPSink, VideoRTPSink from MultiFramedRTPSink, and MultiFramedRTPSink from MediaSink. H264FUAFragmenter replaces H264VideoStreamFramer as the RTPSink's immediate source: when the RTPSink wants a data frame, it gets it from the H264FUAFragmenter.

     

void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // We have no NAL unit data currently in the buffer. Read a new one:
    // fInputSource is the H264VideoStreamFramer; its getNextFrame() calls
    // H264VideoStreamParser's parser(), which in turn reads its data from
    // a ByteStreamFileSource:
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                     afterGettingFrame, this,
                     FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer. There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety.  Deliver the first fragment of this data,
    //    as a FU-A packet, with one extra preceding header byte.
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this.  Deliver the next fragment of this data,
    //    as a FU-A packet, with two extra preceding header bytes.

    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264FUAFragmenter::doGetNextFrame(): fMaxSize ("
           << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize;
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        // 1) An unfragmented packet: deliver the NAL unit as is.
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2
        // We need to send the NAL unit data as FU-A packets.  Deliver the first
        // packet now.  Note that we add FU indicator and FU header bytes to the front
        // of the packet (reusing the existing NAL header byte for the FU header).
        // 2) The first packet of a FU-A sequence:
        fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28; // FU indicator
        fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit)
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3
      // We are sending this NAL unit data as FU-A packets.  We've already sent the
      // first packet (fragment).  Now, send the next fragment.  Note that we add
      // FU indicator and FU header bytes to the front.  (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      // 3) A middle packet of the FU-A sequence: reuse the FU indicator and
      //    FU header, clearing the S (start) bit in the FU header:
      fInputBuffer[fCurDataOffset-2] = fInputBuffer[0]; // FU indicator
      fInputBuffer[fCurDataOffset-1] = fInputBuffer[1]&~0x80; // FU header (no S bit)
      unsigned numBytesToSend = 2 + fNumValidDataBytes - fCurDataOffset;
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment:
        // 4) The last packet of the FU-A (type 28) sequence: set the E (end)
        //    bit in the FU header so the client can reassemble the frame:
        fInputBuffer[fCurDataOffset-1] |= 0x40; // set the E bit in the FU header
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset-2], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - 2;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data.  Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    FramedSource::afterGetting(this);
  }
}

     

The else branch of this function does the RTP payload packing. live555 handles only two kinds of packets: 1) single NAL unit packets, such as SPS and PPS, where one packet carries one complete data frame; and 2) NAL units too large for one packet, which are split using the FU-A scheme. See the RTP packetization notes at:

     http://blog.csdn.net/smilestone_322/article/details/7574253
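The header bytes that cases 2 and 3 construct can be checked against RFC 6184 with a few lines of arithmetic; in this sketch the NAL header value is illustrative:

// Sketch: FU-A (type 28) fragmentation bytes per RFC 6184, matching the
// bit operations in H264FUAFragmenter::doGetNextFrame(). Input illustrative.
#include <cstdint>
#include <cstdio>

int main() {
  uint8_t nalHeader = 0x65; // e.g. an IDR slice: F=0, NRI=3, type=5

  // FU indicator: copy F and NRI from the original NAL header, type = 28:
  uint8_t fuIndicator = (nalHeader & 0xE0) | 28;        // 0x7C

  // FU header: S/E/R bits plus the original NAL unit type (low 5 bits):
  uint8_t fuHeaderFirst  = 0x80 | (nalHeader & 0x1F);   // S bit set: 0x85
  uint8_t fuHeaderMiddle = nalHeader & 0x1F;            // no S/E:    0x05
  uint8_t fuHeaderLast   = 0x40 | (nalHeader & 0x1F);   // E bit set: 0x45

  printf("indicator=0x%02X first=0x%02X mid=0x%02X last=0x%02X\n",
         fuIndicator, fuHeaderFirst, fuHeaderMiddle, fuHeaderLast);
  return 0;
}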

After fInputSource->getNextFrame() returns a NAL unit, the following callback is invoked:

void H264FUAFragmenter::afterGettingFrame(void* clientData, unsigned frameSize,
                         unsigned numTruncatedBytes,
                         struct timeval presentationTime,
                         unsigned durationInMicroseconds) {
  H264FUAFragmenter* fragmenter = (H264FUAFragmenter*)clientData;
  fragmenter->afterGettingFrame1(frameSize, numTruncatedBytes, presentationTime,
                    durationInMicroseconds);
}

     

void H264FUAFragmenter::afterGettingFrame1(unsigned frameSize,
                          unsigned numTruncatedBytes,
                          struct timeval presentationTime,
                          unsigned durationInMicroseconds) {
  fNumValidDataBytes += frameSize;
  fSaveNumTruncatedBytes = numTruncatedBytes;
  fPresentationTime = presentationTime;
  fDurationInMicroseconds = durationInMicroseconds;

  // Deliver data to the client:
  doGetNextFrame();
}

     

Once a frame has been obtained, doGetNextFrame() is called again; this time H264FUAFragmenter::doGetNextFrame() takes its else branch, analyzes and packs the data, and the packet is sent on to the client.

     

The source of MultiFramedRTPSink::afterGettingFrame is as follows:

void MultiFramedRTPSink
::afterGettingFrame(void* clientData, unsigned numBytesRead,
             unsigned numTruncatedBytes,
             struct timeval presentationTime,
             unsigned durationInMicroseconds) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
  sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
                 presentationTime, durationInMicroseconds);
}

     

     

afterGettingFrame delegates to afterGettingFrame1 to consume the data; presumably afterGettingFrame1 is where the sending happens. Here is the source:

void MultiFramedRTPSink
::afterGettingFrame1(unsigned frameSize, unsigned numTruncatedBytes,
              struct timeval presentationTime,
              unsigned durationInMicroseconds) {
  if (fIsFirstPacket) {
    // Record the fact that we're starting to play now:
    gettimeofday(&fNextSendTime, NULL);
  }

  fMostRecentPresentationTime = presentationTime;
  if (fInitialPresentationTime.tv_sec == 0 && fInitialPresentationTime.tv_usec == 0) {
    fInitialPresentationTime = presentationTime;
  }

  // The buffer was too small and the frame was truncated; tell the user
  // to increase the buffer size:
  if (numTruncatedBytes > 0) {
    unsigned const bufferSize = fOutBuf->totalBytesAvailable();
    envir() << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
         << bufferSize << ").  "
         << numTruncatedBytes << " bytes of trailing data was dropped!  Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
         << OutPacketBuffer::maxSize + numTruncatedBytes << ", *before* creating this 'RTPSink'.  (Current value is "
         << OutPacketBuffer::maxSize << ".)\n";
  }
  unsigned curFragmentationOffset = fCurFragmentationOffset;
  unsigned numFrameBytesToUse = frameSize;
  unsigned overflowBytes = 0;

  // If we have already packed one or more frames into this packet,
  // check whether this new frame is eligible to be packed after them.
  // (This is independent of whether the packet has enough room for this
  // new frame; that check comes later.)
  if (fNumFramesUsedSoFar > 0) {
    if ((fPreviousFrameEndedFragmentation
      && !allowOtherFramesAfterLastFragment())
     || !frameCanAppearAfterPacketStart(fOutBuf->curPtr(), frameSize)) {
      // Save away this frame for next time:
      numFrameBytesToUse = 0;
      fOutBuf->setOverflowData(fOutBuf->curPacketSize(), frameSize,
                     presentationTime, durationInMicroseconds);
    }
  }
  fPreviousFrameEndedFragmentation = False;

  if (numFrameBytesToUse > 0) {
    // Check whether this frame overflows the packet
    if (fOutBuf->wouldOverflow(frameSize)) {
      // Don't use this frame now; instead, save it as overflow data, and
      // send it in the next packet instead.  However, if the frame is too
      // big to fit in a packet by itself, then we need to fragment it (and
      // use some of it in this packet, if the payload format permits this.)
      if (isTooBigForAPacket(frameSize)
          && (fNumFramesUsedSoFar == 0 || allowFragmentationAfterStart())) {
        // We need to fragment this frame, and use some of it now:
        overflowBytes = computeOverflowForNewFrame(frameSize);
        numFrameBytesToUse -= overflowBytes;
        fCurFragmentationOffset += numFrameBytesToUse;
      } else {
        // We don't use any of this frame now:
        overflowBytes = frameSize;
        numFrameBytesToUse = 0;
      }
      fOutBuf->setOverflowData(fOutBuf->curPacketSize() + numFrameBytesToUse,
                     overflowBytes, presentationTime, durationInMicroseconds);
    } else if (fCurFragmentationOffset > 0) {
      // This is the last fragment of a frame that was fragmented over
      // more than one packet.  Do any special handling for this case:
      fCurFragmentationOffset = 0;
      fPreviousFrameEndedFragmentation = True;
    }
  }

  if (numFrameBytesToUse == 0 && frameSize > 0) {
    // Send our packet now, because we have filled it up:
    sendPacketIfNecessary();
  } else {
    // Use this frame in our outgoing packet:
    unsigned char* frameStart = fOutBuf->curPtr();
    fOutBuf->increment(numFrameBytesToUse);
        // do this now, in case "doSpecialFrameHandling()" calls "setFramePadding()" to append padding bytes

    // Here's where any payload format specific processing gets done:
    doSpecialFrameHandling(curFragmentationOffset, frameStart,
                 numFrameBytesToUse, presentationTime,
                 overflowBytes);

    ++fNumFramesUsedSoFar;

    // Update the time at which the next packet should be sent, based
    // on the duration of the frame that we just packed into it.
    // However, if this frame has overflow data remaining, then don't
    // count its duration yet:
    if (overflowBytes == 0) {
      fNextSendTime.tv_usec += durationInMicroseconds;
      fNextSendTime.tv_sec += fNextSendTime.tv_usec/1000000;
      fNextSendTime.tv_usec %= 1000000;
    }

    // Send our packet now if (i) it's already at our preferred size, or
    // (ii) (heuristic) another frame of the same size as the one we just
    //      read would overflow the packet, or
    // (iii) it contains the last fragment of a fragmented frame, and we
    //      don't allow anything else to follow this, or
    // (iv) one frame per packet is allowed:
    if (fOutBuf->isPreferredSize()
        || fOutBuf->wouldOverflow(numFrameBytesToUse)
        || (fPreviousFrameEndedFragmentation &&
            !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr() - frameSize,
                          frameSize) ) {
      // The packet is ready to be sent now:
      sendPacketIfNecessary();
    } else {
      // There's room for more frames; try getting another:
      packFrame();
    }
  }
}

     

Now the function that actually sends the data:

void MultiFramedRTPSink::sendPacketIfNecessary() {
  if (fNumFramesUsedSoFar > 0) {
    // Send the packet:
#ifdef TEST_LOSS
    if ((our_random()%10) != 0) // simulate 10% packet loss #####
#endif
      if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
        // if failure handler has been specified, call it
        if (fOnSendErrorFunc != NULL) (*fOnSendErrorFunc)(fOnSendErrorData);
      }
    ++fPacketCount;
    fTotalOctetCount += fOutBuf->curPacketSize();
    fOctetCount += fOutBuf->curPacketSize()
      - rtpHeaderSize - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

    ++fSeqNo; // for next time
  }

  if (fOutBuf->haveOverflowData()
      && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize()/2) {
    // Efficiency hack: Reset the packet start pointer to just in front of
    // the overflow data (allowing for the RTP header and special headers),
    // so that we probably don't have to "memmove()" the overflow data
    // into place when building the next packet:
    unsigned newPacketStart = fOutBuf->curPacketSize()
      - (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
    fOutBuf->adjustPacketStart(newPacketStart);
  } else {
    // Normal case: Reset the packet start pointer back to the start:
    fOutBuf->resetPacketStart();
  }
  fOutBuf->resetOffset();
  fNumFramesUsedSoFar = 0;

  if (fNoFramesLeft) {
    // We're done:
    onSourceClosure(this);
  } else {
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff*1000000 + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);
  }
}

     

The send path uses a delayed task to pace transmission: instead of sending packets back to back, the next build-and-send cycle is scheduled through the task scheduler. Here is sendNext:

void MultiFramedRTPSink::sendNext(void* firstArg) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)firstArg;
  sink->buildAndSendPacket(False);
}

It calls buildAndSendPacket again, this time with False. The parameter marks whether this is the first packet: True means it is, in which case the actual play start time is recorded. afterGettingFrame1 contains the corresponding code:

if (fIsFirstPacket) {
  // Record the fact that we're starting to play now:
  gettimeofday(&fNextSendTime, NULL);
}
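Putting the pieces together, the loop that the code above establishes looks like this:

continuePlaying()
  -> buildAndSendPacket(True)      // first packet: play start time recorded
       -> packFrame()
            -> fSource->getNextFrame(..., afterGettingFrame, ...)
                 -> afterGettingFrame1()   // pack the frame; maybe packFrame() again
                      -> sendPacketIfNecessary()
                           -> scheduleDelayedTask(uSecondsToGo, sendNext)
                                -> sendNext() -> buildAndSendPacket(False) -> ...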

     

In MultiFramedRTPSink the packet buffer and the frame buffer are one and the same; packing and sending are managed with a handful of flags and pointer adjustments. Note that if a frame overflows into the next packet, the timestamp calculation can become inaccurate.
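As a concrete check of the pacing arithmetic: for 25 fps H.264 video the framer reports durationInMicroseconds = 40000, so fNextSendTime advances by 40 ms per frame, and with the 90 kHz RTP clock that H.264 uses, each frame advances the RTP timestamp by 90000 × 0.04 = 3600 ticks.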

     

    from:http://blog.csdn.net/smilestone_322/article/details/18923711
