
Simple Analysis of FFmpeg Source Code: av_write_frame()

=====================================================

FFmpeg library function source code analysis article list: [Architecture Diagrams] [General] [Decoding] [Encoding] [Others] [Scripts] [H.264]

=====================================================


I plan to write a couple of articles briefly analyzing the three functions FFmpeg uses to write a file: avformat_write_header(), av_write_frame(), and av_write_trailer(). The previous article analyzed avformat_write_header(); this one continues with av_write_frame().


av_write_frame() is used to output one frame of audio or video data. Its declaration is in libavformat\avformat.h, as shown below.
/**
 * Write a packet to an output media file.
 *
 * This function passes the packet directly to the muxer, without any buffering
 * or reordering. The caller is responsible for correctly interleaving the
 * packets if the format requires it. Callers that want libavformat to handle
 * the interleaving should call av_interleaved_write_frame() instead of this
 * function.
 *
 * @param s media file handle
 * @param pkt The packet containing the data to be written. Note that unlike
 *            av_interleaved_write_frame(), this function does not take
 *            ownership of the packet passed to it (though some muxers may make
 *            an internal reference to the input packet).
 *            <br>
 *            This parameter can be NULL (at any time, not just at the end), in
 *            order to immediately flush data buffered within the muxer, for
 *            muxers that buffer up data internally before writing it to the
 *            output.
 *            <br>
 *            Packet's @ref AVPacket.stream_index "stream_index" field must be
 *            set to the index of the corresponding stream in @ref
 *            AVFormatContext.streams "s->streams". It is very strongly
 *            recommended that timing information (@ref AVPacket.pts "pts", @ref
 *            AVPacket.dts "dts", @ref AVPacket.duration "duration") is set to
 *            correct values.
 * @return < 0 on error, = 0 if OK, 1 if flushed and there is no more data to flush
 *
 * @see av_interleaved_write_frame()
 */
int av_write_frame(AVFormatContext *s, AVPacket *pkt);

A brief explanation of its parameters:
s: the AVFormatContext used for output.
pkt: the AVPacket to be written.
The function returns 0 when it completes successfully.

The most typical example of using this function can be found in:

Simplest FFmpeg-based video encoder (encoding YUV to H.264)
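Before diving into the source, the following minimal muxing skeleton shows where av_write_frame() sits between avformat_write_header() and av_write_trailer(). It is only a sketch under assumptions: the output name "out.flv", the function name mux_packets, the single stream, and the caller-provided packets are all illustrative; stream codec parameters must come from a real encoder, and error handling is abbreviated.

#include <libavformat/avformat.h>

int mux_packets(AVPacket *pkts, int nb_pkts)
{
    AVFormatContext *oc = NULL;
    int ret;

    av_register_all();  /* required on the FFmpeg versions this article targets */

    if ((ret = avformat_alloc_output_context2(&oc, NULL, NULL, "out.flv")) < 0)
        return ret;

    AVStream *st = avformat_new_stream(oc, NULL);      /* one output stream */
    if (!st)
        return AVERROR(ENOMEM);
    /* ... fill in the stream's codec parameters from the encoder here ... */

    if ((ret = avio_open(&oc->pb, "out.flv", AVIO_FLAG_WRITE)) < 0)
        return ret;
    if ((ret = avformat_write_header(oc, NULL)) < 0)    /* FLV header + metadata tag */
        return ret;

    for (int i = 0; i < nb_pkts; i++) {
        pkts[i].stream_index = st->index;               /* must reference s->streams */
        if ((ret = av_write_frame(oc, &pkts[i])) < 0)   /* caller keeps ownership of pkt */
            break;
    }

    av_write_trailer(oc);                               /* finishes the output file */
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return ret < 0 ? ret : 0;
}

As the documentation quoted above notes, if the caller does not interleave audio and video packets itself, av_interleaved_write_frame() should be used instead of av_write_frame().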

Function call relationship diagram

The call relationships of av_write_frame() are shown in the figure below.


av_write_frame()

av_write_frame() is defined in libavformat\mux.c, as shown below.
int av_write_frame(AVFormatContext *s, AVPacket *pkt)
{
    int ret;

    ret = check_packet(s, pkt);
    if (ret < 0)
        return ret;
    // pkt is NULL: flush the muxer
    if (!pkt) {
        if (s->oformat->flags & AVFMT_ALLOW_FLUSH) {
            ret = s->oformat->write_packet(s, NULL);
            if (s->flush_packets && s->pb && s->pb->error >= 0 && s->flags & AVFMT_FLAG_FLUSH_PACKETS)
                avio_flush(s->pb);
            if (ret >= 0 && s->pb && s->pb->error < 0)
                ret = s->pb->error;
            return ret;
        }
        return 1;
    }

    ret = compute_pkt_fields2(s, s->streams[pkt->stream_index], pkt);

    if (ret < 0 && !(s->oformat->flags & AVFMT_NOTIMESTAMPS))
        return ret;
    // write the packet
    ret = write_packet(s, pkt);
    if (ret >= 0 && s->pb && s->pb->error < 0)
        ret = s->pb->error;

    if (ret >= 0)
        s->streams[pkt->stream_index]->nb_frames++;
    return ret;
}

As the source code shows, av_write_frame() performs the following main steps:
(1) Call check_packet() to run some simple checks on the packet
(2) Call compute_pkt_fields2() to set some fields of the AVPacket
(3) Call write_packet() to write the data

The following sections look at each of these functions.

check_packet()

check_packet() is defined in libavformat\mux.c, as shown below.
static int check_packet(AVFormatContext *s, AVPacket *pkt)
{
    if (!pkt)
        return 0;

    if (pkt->stream_index < 0 || pkt->stream_index >= s->nb_streams) {
        av_log(s, AV_LOG_ERROR, "Invalid packet stream index: %d\n",
               pkt->stream_index);
        return AVERROR(EINVAL);
    }

    if (s->streams[pkt->stream_index]->codec->codec_type == AVMEDIA_TYPE_ATTACHMENT) {
        av_log(s, AV_LOG_ERROR, "Received a packet for an attachment stream.\n");
        return AVERROR(EINVAL);
    }

    return 0;
}

As the code shows, check_packet() is fairly simple: it first checks whether the input AVPacket is NULL and, if so, returns immediately; it then checks that the AVPacket's stream_index (which identifies the AVStream the packet belongs to) is valid, returning an error if it is negative or not less than the number of streams; finally, it checks whether the packet belongs to an attachment stream, a case I have not encountered and have not studied yet.
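As a small illustration of the stream_index check (an assumed usage example, not taken from the article), a packet whose stream_index falls outside [0, nb_streams) is rejected with AVERROR(EINVAL) before anything is written:

#include <libavformat/avformat.h>

static int demo_bad_stream_index(AVFormatContext *oc)
{
    AVPacket pkt;
    av_init_packet(&pkt);               // packet API of the same era as the article
    pkt.data = NULL;
    pkt.size = 0;
    pkt.stream_index = oc->nb_streams;  // one past the last valid index
    return av_write_frame(oc, &pkt);    // check_packet() makes this return AVERROR(EINVAL)
}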

compute_pkt_fields2()

compute_pkt_fields2() is defined in libavformat\mux.c, as shown below.
//FIXME merge with compute_pkt_fields
static int compute_pkt_fields2(AVFormatContext *s, AVStream *st, AVPacket *pkt)
{
    int delay = FFMAX(st->codec->has_b_frames, st->codec->max_b_frames > 0);
    int num, den, i;
    int frame_size;

    av_dlog(s, "compute_pkt_fields2: pts:%s dts:%s cur_dts:%s b:%d size:%d st:%d\n",
            av_ts2str(pkt->pts), av_ts2str(pkt->dts), av_ts2str(st->cur_dts), delay, pkt->size, pkt->stream_index);

    if (pkt->duration < 0 && st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE) {
        av_log(s, AV_LOG_WARNING, "Packet with invalid duration %d in stream %d\n",
               pkt->duration, pkt->stream_index);
        pkt->duration = 0;
    }

    /* duration field */
    if (pkt->duration == 0) {
        ff_compute_frame_duration(s, &num, &den, st, NULL, pkt);
        if (den && num) {
            pkt->duration = av_rescale(1, num * (int64_t)st->time_base.den * st->codec->ticks_per_frame, den * (int64_t)st->time_base.num);
        }
    }

    if (pkt->pts == AV_NOPTS_VALUE && pkt->dts != AV_NOPTS_VALUE && delay == 0)
        pkt->pts = pkt->dts;

    //XXX/FIXME this is a temporary hack until all encoders output pts
    if ((pkt->pts == 0 || pkt->pts == AV_NOPTS_VALUE) && pkt->dts == AV_NOPTS_VALUE && !delay) {
        static int warned;
        if (!warned) {
            av_log(s, AV_LOG_WARNING, "Encoder did not produce proper pts, making some up.\n");
            warned = 1;
        }
        pkt->dts =
//        pkt->pts= st->cur_dts;
            pkt->pts = st->pts.val;
    }

    //calculate dts from pts
    if (pkt->pts != AV_NOPTS_VALUE && pkt->dts == AV_NOPTS_VALUE && delay <= MAX_REORDER_DELAY) {
        st->pts_buffer[0] = pkt->pts;
        for (i = 1; i < delay + 1 && st->pts_buffer[i] == AV_NOPTS_VALUE; i++)
            st->pts_buffer[i] = pkt->pts + (i - delay - 1) * pkt->duration;
        for (i = 0; i<delay && st->pts_buffer[i] > st->pts_buffer[i + 1]; i++)
            FFSWAP(int64_t, st->pts_buffer[i], st->pts_buffer[i + 1]);

        pkt->dts = st->pts_buffer[0];
    }

    if (st->cur_dts && st->cur_dts != AV_NOPTS_VALUE &&
        ((!(s->oformat->flags & AVFMT_TS_NONSTRICT) &&
          st->cur_dts >= pkt->dts) || st->cur_dts > pkt->dts)) {
        av_log(s, AV_LOG_ERROR,
               "Application provided invalid, non monotonically increasing dts to muxer in stream %d: %s >= %s\n",
               st->index, av_ts2str(st->cur_dts), av_ts2str(pkt->dts));
        return AVERROR(EINVAL);
    }
    if (pkt->dts != AV_NOPTS_VALUE && pkt->pts != AV_NOPTS_VALUE && pkt->pts < pkt->dts) {
        av_log(s, AV_LOG_ERROR,
               "pts (%s) < dts (%s) in stream %d\n",
               av_ts2str(pkt->pts), av_ts2str(pkt->dts),
               st->index);
        return AVERROR(EINVAL);
    }

    av_dlog(s, "av_write_frame: pts2:%s dts2:%s\n",
            av_ts2str(pkt->pts), av_ts2str(pkt->dts));
    st->cur_dts = pkt->dts;
    st->pts.val = pkt->dts;

    /* update pts */
    switch (st->codec->codec_type) {
    case AVMEDIA_TYPE_AUDIO:
        frame_size = (pkt->flags & AV_PKT_FLAG_UNCODED_FRAME) ?
                     ((AVFrame *)pkt->data)->nb_samples :
                     av_get_audio_frame_duration(st->codec, pkt->size);

        /* HACK/FIXME, we skip the initial 0 size packets as they are most
         * likely equal to the encoder delay, but it would be better if we
         * had the real timestamps from the encoder */
        if (frame_size >= 0 && (pkt->size || st->pts.num != st->pts.den >> 1 || st->pts.val)) {
            frac_add(&st->pts, (int64_t)st->time_base.den * frame_size);
        }
        break;
    case AVMEDIA_TYPE_VIDEO:
        frac_add(&st->pts, (int64_t)st->time_base.den * st->codec->time_base.num);
        break;
    }
    return 0;
}

As the code shows, compute_pkt_fields2() does two main things: it computes fields of the AVPacket such as duration and dts, and it checks that the pts and dts values are reasonable (for example, that pts is not smaller than dts, and that dts increases monotonically). I have not gone through the code in detail; it will be analyzed later when time permits.
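As a worked example of the duration computation above (the numbers are illustrative assumptions): for a 25 fps stream muxed with a 1/1000 time base, ff_compute_frame_duration() yields num/den = 1/25, and the av_rescale() call turns that into 40 time-base units, i.e. 40 ms per packet:

#include <stdio.h>
#include <inttypes.h>
#include <libavutil/mathematics.h>

int main(void)
{
    int64_t num = 1, den = 25;          // frame duration as a fraction of a second (25 fps)
    int64_t tb_num = 1, tb_den = 1000;  // stream time base 1/1000 (FLV uses milliseconds)
    int64_t ticks_per_frame = 1;

    // mirrors: av_rescale(1, num * time_base.den * ticks_per_frame, den * time_base.num)
    int64_t duration = av_rescale(1, num * tb_den * ticks_per_frame, den * tb_num);
    printf("packet duration = %"PRId64" time-base units\n", duration);  // prints 40
    return 0;
}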

AVOutputFormat->write_packet()

write_packet() is defined in libavformat\mux.c, as shown below.
/**
 * Make timestamps non negative, move side data from payload to internal struct, call muxer, and restore
 * sidedata.
 *
 * FIXME: this function should NEVER get undefined pts/dts beside when the
 * AVFMT_NOTIMESTAMPS is set.
 * Those additional safety checks should be dropped once the correct checks
 * are set in the callers.
 */
static int write_packet(AVFormatContext *s, AVPacket *pkt)
{
    int ret, did_split;

    if (s->output_ts_offset) {
        AVStream *st = s->streams[pkt->stream_index];
        int64_t offset = av_rescale_q(s->output_ts_offset, AV_TIME_BASE_Q, st->time_base);

        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts += offset;
        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts += offset;
    }

    if (s->avoid_negative_ts > 0) {
        AVStream *st = s->streams[pkt->stream_index];
        int64_t offset = st->mux_ts_offset;

        if (s->offset == AV_NOPTS_VALUE && pkt->dts != AV_NOPTS_VALUE &&
            (pkt->dts < 0 || s->avoid_negative_ts == AVFMT_AVOID_NEG_TS_MAKE_ZERO)) {
            s->offset = -pkt->dts;
            s->offset_timebase = st->time_base;
        }

        if (s->offset != AV_NOPTS_VALUE && !offset) {
            offset = st->mux_ts_offset =
                av_rescale_q_rnd(s->offset,
                                 s->offset_timebase,
                                 st->time_base,
                                 AV_ROUND_UP);
        }

        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts += offset;
        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts += offset;

        av_assert2(pkt->dts == AV_NOPTS_VALUE || pkt->dts >= 0 || s->max_interleave_delta > 0);
        if (pkt->dts != AV_NOPTS_VALUE && pkt->dts < 0) {
            av_log(s, AV_LOG_WARNING,
                   "Packets poorly interleaved, failed to avoid negative "
                   "timestamp %s in stream %d.\n"
                   "Try -max_interleave_delta 0 as a possible workaround.\n",
                   av_ts2str(pkt->dts),
                   pkt->stream_index
            );
        }
    }

    did_split = av_packet_split_side_data(pkt);
    if ((pkt->flags & AV_PKT_FLAG_UNCODED_FRAME)) {
        AVFrame *frame = (AVFrame *)pkt->data;
        av_assert0(pkt->size == UNCODED_FRAME_PACKET_SIZE);
        ret = s->oformat->write_uncoded_frame(s, pkt->stream_index, &frame, 0);
        av_frame_free(&frame);
    } else {
    	// write via the muxer's write_packet callback
        ret = s->oformat->write_packet(s, pkt);
    }

    if (s->flush_packets && s->pb && ret >= 0 && s->flags & AVFMT_FLAG_FLUSH_PACKETS)
        avio_flush(s->pb);

    if (did_split)
        av_packet_merge_side_data(pkt);

    return ret;
}

The key part of write_packet() is the call into the AVOutputFormat's write method. If the AVPacket's flags include AV_PKT_FLAG_UNCODED_FRAME, the AVOutputFormat's write_uncoded_frame() is called; otherwise its write_packet() is called. This write_packet() is actually a function pointer that points to the implementation in the specific AVOutputFormat. For example, let's look at the AVOutputFormat for FLV, located in libavformat\flvenc.c, shown below.
AVOutputFormat ff_flv_muxer = {
    .name           = "flv",
    .long_name      = NULL_IF_CONFIG_SMALL("FLV (Flash Video)"),
    .mime_type      = "video/x-flv",
    .extensions     = "flv",
    .priv_data_size = sizeof(FLVContext),
    .audio_codec    = CONFIG_LIBMP3LAME ? AV_CODEC_ID_MP3 : AV_CODEC_ID_ADPCM_SWF,
    .video_codec    = AV_CODEC_ID_FLV1,
    .write_header   = flv_write_header,
    .write_packet   = flv_write_packet,
    .write_trailer  = flv_write_trailer,
    .codec_tag      = (const AVCodecTag* const []) {
                          flv_video_codec_ids, flv_audio_codec_ids, 0
                      },
    .flags          = AVFMT_GLOBALHEADER | AVFMT_VARIABLE_FPS |
                      AVFMT_TS_NONSTRICT,
};

From the definition of ff_flv_muxer we can see that write_packet points to flv_write_packet(). A minimal illustrative muxer is sketched below to show how this callback mechanism looks from the muxer side; after that, before examining flv_write_packet(), let's briefly review the structure of the FLV container format.
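To make the callback mechanism concrete, here is a sketch of a minimal muxer (purely illustrative, not part of FFmpeg; hooking it into libavformat would also require build-system changes). All it has to do is fill in the write_packet function pointer, which is what libavformat's write_packet() ends up calling:

#include <libavformat/avformat.h>

static int raw_write_packet(AVFormatContext *s, AVPacket *pkt)
{
    avio_write(s->pb, pkt->data, pkt->size);  // dump the payload as-is
    return 0;
}

AVOutputFormat ff_example_raw_muxer = {
    .name         = "example_raw",
    .long_name    = "raw payload dump (illustrative)",
    .extensions   = "raw",
    .audio_codec  = AV_CODEC_ID_NONE,
    .video_codec  = AV_CODEC_ID_RAWVIDEO,
    .write_packet = raw_write_packet,
    .flags        = AVFMT_NOTIMESTAMPS,
};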

FLV container format

The FLV container format is structured as shown in the figure below.

PS: The original figure was found online; I find it very clear and easier to follow than the official Video File Format Specification. There was one error in it, though: the length of the StreamID field in the Tag Header was wrong (the official standard says it should be 3 bytes; the figure has since been corrected).


As the FLV container structure shows, the file data consists of Tags chained one after another, separated by PreviousTagSize fields. The task of flv_write_packet() is therefore to write one Tag plus one PreviousTagSize. The format of the Tag Data is noted briefly below; it differs according to the Tag type and can be audio Tag Data, video Tag Data, or script Tag Data. Audio and video Tag Data are described next.
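For reference, here is a sketch of the 11-byte Tag Header layout just described (field widths are per the FLV specification; the struct is only illustrative, since FFmpeg writes these fields byte by byte with avio_w8()/avio_wb24(), as shown later):

#include <stdint.h>

typedef struct FLVTagHeader {
    uint8_t  type;          /* 1 byte : 8 = audio, 9 = video, 18 = script data    */
    uint32_t data_size;     /* 3 bytes: size of the tag data that follows         */
    uint32_t timestamp;     /* 3 bytes: lower 24 bits of the timestamp (ms)       */
    uint8_t  timestamp_ext; /* 1 byte : upper 8 bits of the timestamp             */
    uint32_t stream_id;     /* 3 bytes: always 0                                  */
} FLVTagHeader;             /* followed by DataSize bytes of tag data and a 4-byte
                               PreviousTagSize equal to 11 + DataSize             */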

Audio Tag Data

The Audio Tag is defined in the official standard as follows.
The first byte of an Audio Tag contains the parameters of the audio data; the audio stream data starts from the second byte.
The value of the first 4 bits of the first byte indicates the audio data format:
0 = Linear PCM, platform endian
1 = ADPCM
2 = MP3
3 = Linear PCM, little endian
4 = Nellymoser 16-kHz mono
5 = Nellymoser 8-kHz mono
6 = Nellymoser
7 = G.711 A-law logarithmic PCM
8 = G.711 mu-law logarithmic PCM
9 = reserved
10 = AAC
14 = MP3 8-Khz
15 = Device-specific sound
Bits 5-6 of the first byte indicate the sample rate: 0 = 5.5 kHz, 1 = 11 kHz, 2 = 22 kHz, 3 = 44 kHz.
Bit 7 of the first byte indicates the sample size: 0 = 8 bits, 1 = 16 bits.
Bit 8 of the first byte indicates the channel type: 0 = sndMono, 1 = sndStereo.
When the audio codec is AAC, the data following the first byte is AACAUDIODATA, whose format is shown below.
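As a worked example of the first-byte layout above (the stream parameters are assumptions for illustration): for 44 kHz, 16-bit, stereo AAC the flags byte works out to 0xAF, which is the value commonly seen for AAC audio tags in FLV:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t format = 10, rate = 3, size = 1, type = 1;  // AAC, 44 kHz, 16 bit, stereo
    uint8_t flags  = (uint8_t)((format << 4) | (rate << 2) | (size << 1) | type);
    printf("audio tag flags byte = 0x%02X\n", flags);   // prints 0xAF
    return 0;
}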
 

Video Tag Data

The Video Tag is defined in the official standard as follows.
The Video Tag also uses its first byte to carry the parameters of the video data; the video stream data starts from the second byte.
The value of the first 4 bits of the first byte indicates the frame type (FrameType):
1: keyframe (for AVC, a seekable frame)
2: inter frame (for AVC, a non-seekable frame)
3: disposable inter frame (H.263 only)
4: generated keyframe (reserved for server use only)
5: video info/command frame
The value of the last 4 bits of the first byte indicates the video codec ID (CodecID):
1: JPEG (currently unused)
2: Sorenson H.263
3: Screen video
4: On2 VP6
5: On2 VP6 with alpha channel
6: Screen video version 2
7: AVC
When the video codec is AVC (H.264), the data following the first byte is AVCVIDEOPACKET, whose format is shown below.
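Similarly, a worked example for the video flags byte described above (assumed values): an H.264 key frame gives 0x17 and an inter frame gives 0x27:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t keyframe   = (1 << 4) | 7;  // FrameType = 1 (key frame),   CodecID = 7 (AVC)
    uint8_t interframe = (2 << 4) | 7;  // FrameType = 2 (inter frame), CodecID = 7 (AVC)
    printf("AVC key frame flags byte   = 0x%02X\n", keyframe);    // prints 0x17
    printf("AVC inter frame flags byte = 0x%02X\n", interframe);  // prints 0x27
    return 0;
}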
 

flv_write_packet()

Now let's look at flv_write_packet(), the implementation of write_packet() for the FLV format. It is defined in libavformat\flvenc.c, as shown below.
static int flv_write_packet(AVFormatContext *s, AVPacket *pkt)
{
    AVIOContext *pb      = s->pb;
    AVCodecContext *enc  = s->streams[pkt->stream_index]->codec;
    FLVContext *flv      = s->priv_data;
    FLVStreamContext *sc = s->streams[pkt->stream_index]->priv_data;
    unsigned ts;
    int size = pkt->size;
    uint8_t *data = NULL;
    int flags = -1, flags_size, ret;

    if (enc->codec_id == AV_CODEC_ID_VP6F || enc->codec_id == AV_CODEC_ID_VP6A ||
        enc->codec_id == AV_CODEC_ID_VP6  || enc->codec_id == AV_CODEC_ID_AAC)
        flags_size = 2;
    else if (enc->codec_id == AV_CODEC_ID_H264 || enc->codec_id == AV_CODEC_ID_MPEG4)
        flags_size = 5;
    else
        flags_size = 1;

    if (flv->delay == AV_NOPTS_VALUE)
        flv->delay = -pkt->dts;

    if (pkt->dts < -flv->delay) {
        av_log(s, AV_LOG_WARNING,
               "Packets are not in the proper order with respect to DTS\n");
        return AVERROR(EINVAL);
    }

    ts = pkt->dts + flv->delay; // add delay to force positive dts

    if (s->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED) {
        write_metadata(s, ts);
        s->event_flags &= ~AVSTREAM_EVENT_FLAG_METADATA_UPDATED;
    }
    //Tag Header
    switch (enc->codec_type) {
    case AVMEDIA_TYPE_VIDEO:
    	//Type
        avio_w8(pb, FLV_TAG_TYPE_VIDEO);

        flags = enc->codec_tag;
        if (flags == 0) {
            av_log(s, AV_LOG_ERROR,
                   "Video codec '%s' is not compatible with FLV\n",
                   avcodec_get_name(enc->codec_id));
            return AVERROR(EINVAL);
        }
        //Key Frame?
        flags |= pkt->flags & AV_PKT_FLAG_KEY ? FLV_FRAME_KEY : FLV_FRAME_INTER;
        break;
    case AVMEDIA_TYPE_AUDIO:
    	
        flags = get_audio_flags(s, enc);

        av_assert0(size);
        //Type
        avio_w8(pb, FLV_TAG_TYPE_AUDIO);
        break;
    case AVMEDIA_TYPE_DATA:
    	//Type
        avio_w8(pb, FLV_TAG_TYPE_META);
        break;
    default:
        return AVERROR(EINVAL);
    }

    if (enc->codec_id == AV_CODEC_ID_H264 || enc->codec_id == AV_CODEC_ID_MPEG4) {
        /* check if extradata looks like mp4 formated */
        if (enc->extradata_size > 0 && *(uint8_t*)enc->extradata != 1)
            if ((ret = ff_avc_parse_nal_units_buf(pkt->data, &data, &size)) < 0)
                return ret;
    } else if (enc->codec_id == AV_CODEC_ID_AAC && pkt->size > 2 &&
               (AV_RB16(pkt->data) & 0xfff0) == 0xfff0) {
        if (!s->streams[pkt->stream_index]->nb_frames) {
        av_log(s, AV_LOG_ERROR, "Malformed AAC bitstream detected: "
               "use the audio bitstream filter 'aac_adtstoasc' to fix it "
               "('-bsf:a aac_adtstoasc' option with ffmpeg)\n");
        return AVERROR_INVALIDDATA;
        }
        av_log(s, AV_LOG_WARNING, "aac bitstream error\n");
    }

    /* check Speex packet duration */
    if (enc->codec_id == AV_CODEC_ID_SPEEX && ts - sc->last_ts > 160)
        av_log(s, AV_LOG_WARNING, "Warning: Speex stream has more than "
                                  "8 frames per packet. Adobe Flash "
                                  "Player cannot handle this!\n");

    if (sc->last_ts < ts)
        sc->last_ts = ts;

    if (size + flags_size >= 1<<24) {
        av_log(s, AV_LOG_ERROR, "Too large packet with size %u >= %u\n",
               size + flags_size, 1<<24);
        return AVERROR(EINVAL);
    }
    //Tag Header - Datasize
    avio_wb24(pb, size + flags_size);
    //Tag Header - Timestamp
    avio_wb24(pb, ts & 0xFFFFFF);
    avio_w8(pb, (ts >> 24) & 0x7F); // timestamps are 32 bits _signed_
    //StreamID
    avio_wb24(pb, flv->reserved);

    if (enc->codec_type == AVMEDIA_TYPE_DATA) {
        int data_size;
        int64_t metadata_size_pos = avio_tell(pb);
        if (enc->codec_id == AV_CODEC_ID_TEXT) {
            // legacy FFmpeg magic?
            avio_w8(pb, AMF_DATA_TYPE_STRING);
            put_amf_string(pb, "onTextData");
            avio_w8(pb, AMF_DATA_TYPE_MIXEDARRAY);
            avio_wb32(pb, 2);
            put_amf_string(pb, "type");
            avio_w8(pb, AMF_DATA_TYPE_STRING);
            put_amf_string(pb, "Text");
            put_amf_string(pb, "text");
            avio_w8(pb, AMF_DATA_TYPE_STRING);
            put_amf_string(pb, pkt->data);
            put_amf_string(pb, "");
            avio_w8(pb, AMF_END_OF_OBJECT);
        } else {
            // just pass the metadata through
            avio_write(pb, data ? data : pkt->data, size);
        }
        /* write total size of tag */
        data_size = avio_tell(pb) - metadata_size_pos;
        avio_seek(pb, metadata_size_pos - 10, SEEK_SET);
        avio_wb24(pb, data_size);
        avio_seek(pb, data_size + 10 - 3, SEEK_CUR);
        avio_wb32(pb, data_size + 11);
    } else {
        av_assert1(flags>=0);
        //First Byte of Tag Data
        avio_w8(pb,flags);
        if (enc->codec_id == AV_CODEC_ID_VP6)
            avio_w8(pb,0);
        if (enc->codec_id == AV_CODEC_ID_VP6F || enc->codec_id == AV_CODEC_ID_VP6A) {
            if (enc->extradata_size)
                avio_w8(pb, enc->extradata[0]);
            else
                avio_w8(pb, ((FFALIGN(enc->width,  16) - enc->width) << 4) |
                             (FFALIGN(enc->height, 16) - enc->height));
        } else if (enc->codec_id == AV_CODEC_ID_AAC)
            avio_w8(pb, 1); // AAC raw
        else if (enc->codec_id == AV_CODEC_ID_H264 || enc->codec_id == AV_CODEC_ID_MPEG4) {
            //AVCVIDEOPACKET-AVCPacketType
        	avio_w8(pb, 1); // AVC NALU
        	//AVCVIDEOPACKET-CompositionTime
            avio_wb24(pb, pkt->pts - pkt->dts);
        }
        //Data
        avio_write(pb, data ? data : pkt->data, size);

        avio_wb32(pb, size + flags_size + 11); // previous tag size
        flv->duration = FFMAX(flv->duration,
                              pkt->pts + flv->delay + pkt->duration);
    }

    av_free(data);

    return pb->error;
}

Using the source code, let's briefly walk through the flow of flv_write_packet() when writing H.264/AAC:
(1) Write the Type field of the Tag Header. For video, the code is:
avio_w8(pb, FLV_TAG_TYPE_VIDEO);
For audio, the code is:
avio_w8(pb, FLV_TAG_TYPE_AUDIO);
(2) Write the DataSize, Timestamp and StreamID of the Tag Header (this completes the Tag Header):
    //Tag Header - Datasize
    avio_wb24(pb, size + flags_size);
    //Tag Header - Timestamp
    avio_wb24(pb, ts & 0xFFFFFF);
    avio_w8(pb, (ts >> 24) & 0x7F); // timestamps are 32 bits _signed_
    //StreamID
    avio_wb24(pb, flv->reserved);
(3) Write the first byte of the Tag Data (the flags value was already set earlier in the function):
    //First Byte of Tag Data
    avio_w8(pb,flags);
(4) If the codec is VP6, handle it accordingly (not covered here); if the codec is AAC, write AACAUDIODATA; if the codec is H.264, write AVCVIDEOPACKET:
        if (enc->codec_id == AV_CODEC_ID_VP6F || enc->codec_id == AV_CODEC_ID_VP6A) {
            if (enc->extradata_size)
                avio_w8(pb, enc->extradata[0]);
            else
                avio_w8(pb, ((FFALIGN(enc->width,  16) - enc->width) << 4) |
                             (FFALIGN(enc->height, 16) - enc->height));
        } else if (enc->codec_id == AV_CODEC_ID_AAC)
            avio_w8(pb, 1); // AAC raw
        else if (enc->codec_id == AV_CODEC_ID_H264 || enc->codec_id == AV_CODEC_ID_MPEG4) {
            //AVCVIDEOPACKET-AVCPacketType
        	avio_w8(pb, 1); // AVC NALU
        	//AVCVIDEOPACKET-CompositionTime
            avio_wb24(pb, pkt->pts - pkt->dts);
        }
(5) Write the payload data:
        //Data
        avio_write(pb, data ? data : pkt->data, size);
(6) Write the PreviousTagSize:
avio_wb32(pb, size + flags_size + 11); // previous tag size
With that, flv_write_packet() has finished writing one complete Tag.
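Pulling these six steps together, here is a compact sketch that writes one H.264 video Tag with the same avio calls (illustrative only: it assumes pb is an already opened AVIOContext, the timestamps are in milliseconds, and the payload is already in AVCC length-prefixed form):

#include <stdint.h>
#include <libavformat/avio.h>

static void write_one_flv_avc_tag(AVIOContext *pb, const uint8_t *data, int size,
                                  int64_t pts, int64_t dts, int keyframe)
{
    int flags_size = 5;                         // flags byte + AVCPacketType + CompositionTime
    avio_w8(pb, 9);                             // (1) tag type: video
    avio_wb24(pb, size + flags_size);           // (2) data size
    avio_wb24(pb, dts & 0xFFFFFF);              //     timestamp, low 24 bits
    avio_w8(pb, (dts >> 24) & 0x7F);            //     timestamp, extended byte
    avio_wb24(pb, 0);                           //     stream id, always 0
    avio_w8(pb, (keyframe ? 0x10 : 0x20) | 7);  // (3) frame type | CodecID 7 (AVC)
    avio_w8(pb, 1);                             // (4) AVCPacketType: NALU
    avio_wb24(pb, pts - dts);                   //     CompositionTime
    avio_write(pb, data, size);                 // (5) payload
    avio_wb32(pb, size + flags_size + 11);      // (6) PreviousTagSize = 11 + DataSize
}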

雷霄驊
[email protected]
http://blog.csdn.net/leixiaohua1020
