AVFoundation Programming Guide (Official Documentation Translation, Part 6): Export

Export

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer’s inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.

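A minimal sketch of the real-time case described in the note (illustrative only; it assumes the input feeds samples from an `AVCaptureOutput` delegate, and that the writer was created with `AVFileTypeQuickTimeMovie` so nil passthrough settings are allowed):

```objc
// Create a writer input for video samples coming from an AVCaptureOutput.
// nil output settings write the captured samples as-is (passthrough).
AVAssetWriterInput *captureWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil];
// Required for real-time sources; leave it NO for file-based sources,
// or the output file will not be interleaved properly.
captureWriterInput.expectsMediaDataInRealTime = YES;
```
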
Reading an Asset

Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

Creating the Asset Reader

All you need to initialize an AVAssetReader object is the asset that you want to read.

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that the asset reader was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

Setting Up the Asset Reader Outputs

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, using a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.

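For example, a passthrough version of the track output above (reusing the `audioTrack` variable from the previous listing) needs nothing more than a nil settings dictionary:

```objc
// nil output settings: samples are vended in their stored (compressed) format.
AVAssetReaderTrackOutput *passthroughOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:nil];
if ([assetReader canAddOutput:passthroughOutput])
    [assetReader addOutput:passthroughOutput];
```
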
You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code displays how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: You can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

Reading the Asset’s Media Data

To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
    }
    else
    {
      // The asset reader output has read all of its samples.
      done = YES;
    }
  }
}

Writing an Asset

You use the AVAssetWriter class to write media data from multiple sources to a single file of a specified file format. You don’t need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code displays how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&outError];
BOOL success = (assetWriter != nil);

Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey    : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.

Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video’s original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.

NSDictionary *pixelBufferAttributes = @{
     (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
     (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
     (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.

Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions and the time range of each session defines the time range of media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don’t want to include media data from the first half of the asset, you would do the following:

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to end a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end the writing session simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

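The two ways of ending a session can be sketched as follows (a sketch, not part of the original listing; `stopTime` is a placeholder `CMTime`):

```objc
// Either end the sample-writing session explicitly at a chosen time...
[self.assetWriter endSessionAtSourceTime:stopTime];

// ...or, if the session runs to the end of the file, simply finish writing.
// finishWritingWithCompletionHandler: is the asynchronous variant that
// replaces the deprecated synchronous finishWriting.
[self.assetWriter finishWritingWithCompletionHandler:^{
    if (self.assetWriter.status == AVAssetWriterStatusFailed) {
        // Handle self.assetWriter.error here.
    }
}];
```
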
// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the next sample buffer.
          CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
          if (nextSampleBuffer)
          {
               // If it exists, append the next sample buffer to the output file.
               [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
               CFRelease(nextSampleBuffer);
               nextSampleBuffer = nil;
          }
          else
          {
               // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
               [self.assetWriterInput markAsFinished];
               break;
          }
     }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.

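For instance, if the samples come from an asset reader output, the stub reduces to a single call (a sketch assuming the `assetReaderOutput` property used earlier in this chapter):

```objc
- (CMSampleBufferRef)copyNextSampleBufferToWrite
{
    // copyNextSampleBuffer returns NULL once the output runs out of samples,
    // which is exactly the sentinel the writing loop above checks for.
    return [self.assetReaderOutput copyNextSampleBuffer];
}
```
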
Reencoding Assets

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet displays how to use a single asset writer input to write media data supplied by a single asset reader output:

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the asset reader output's next sample buffer.
          CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
          if (sampleBuffer != NULL)
          {
               // If it exists, append this sample buffer to the output file.
               BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
               CFRelease(sampleBuffer);
               sampleBuffer = NULL;
               // Check for errors that may have occurred when appending the new sample buffer.
               if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
               {
                    NSError *failureError = self.assetWriter.error;
                    //Handle the error.
               }
          }
          else
          {
               // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
               if (self.assetReader.status == AVAssetReaderStatusFailed)
               {
                    NSError *failureError = self.assetReader.error;
                    //Handle the error here.
               }
               else
               {
                    // The asset reader output must have vended all of its samples. Mark the input as finished.
                    [self.assetWriterInput markAsFinished];
                    break;
               }
          }
     }
}];

Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
  • Use a dispatch group to be notified of completion of the reencoding process
  • Allow a user to cancel the reencoding process once it has begun
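
The dispatch-group bullet can be sketched as follows; this is a sketch only, and the `dispatchGroup` property is an assumption (the serialization-queue properties appear later in this example):

```objc
// Enter the group once per output/input combination before writing starts...
self.dispatchGroup = dispatch_group_create();
dispatch_group_enter(self.dispatchGroup);   // audio combination
dispatch_group_enter(self.dispatchGroup);   // video combination

// ...call dispatch_group_leave() from each requestMediaDataWhenReadyOnQueue:
// block when its input is marked as finished, then get notified here once
// both combinations are done.
dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
    // Both tracks have been written (or cancelled); finish the file here.
});
```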

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation) and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation.

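A cancellation entry point, for example, only needs to set a flag from the main serialization queue so that the reading and writing blocks observe it consistently (a sketch using the `cancelled` property that appears below):

```objc
- (void)cancel
{
    // Serialize the cancellation with the rest of the work so that every
    // block dispatched to mainSerializationQueue sees a consistent flag.
    dispatch_async(self.mainSerializationQueue, ^{
        self.cancelled = YES;
    });
}
```
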
Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
     // Once the tracks have finished loading, dispatch the work to the main serialization queue.
     dispatch_async(self.mainSerializationQueue, ^{
          // Due to asynchronous nature, check to see if user has already cancelled.
          if (self.cancelled)
               return;
          BOOL success = YES;
          NSError *localError = nil;
          // Check for success of loading the assets tracks.
          success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
          if (success)
          {
               // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
               NSFileManager *fm = [NSFileManager defaultManager];
               NSString *localOutputPath = [self.outputURL path];
               if ([fm fileExistsAtPath:localOutputPath])
                    success = [fm removeItemAtPath:localOutputPath error:&localError];
          }
          if (success)
               success = [self setupAssetReaderAndAssetWriter:&localError];
          if (success)
               success = [self startAssetReaderAndWriter:&localError];
          if (!success)
               [self readingAndWritingDidFinishSuccessfully:success withError:localError];
     });
}];

When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. Now all that’s left is to implement the cancellation process and the three custom methods at the end of the previous code listing.

Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.


- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
    BOOL success = (self.assetReader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:outError];
        success = (self.assetWriter != nil);
    }

    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];

        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack
                                                                                     outputSettings:decompressionAudioSettings];
            [self.assetReader addOutput:self.assetReaderAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                                                       AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                                                       AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                                                       AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                                                       AVChannelLayoutKey    : channelLayoutAsData,
                                                       AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
                                                       };
            self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType]
                                                                            outputSettings:compressionAudioSettings];
            [self.assetWriter addInput:self.assetWriterAudioInput];
        }

        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            NSDictionary *decompressionVideoSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
                                                         };
            self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack
                                                                                     outputSettings:decompressionVideoSettings];
            [self.assetReader addOutput:self.assetReaderVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
            if ([videoFormatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                                      AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                                      AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                                      AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                                      AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                                      };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                                         AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                                         AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                                         };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264.
            NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                                                                           AVVideoCodecKey  : AVVideoCodecH264,
                                                                           AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                                                                           AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
                                                                           }];
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType]
                                                                            outputSettings:videoSettings];
            [self.assetWriter addInput:self.assetWriterVideoInput];
        }
    }
    return success;
}
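
Once the reader outputs and writer inputs are configured as above, the actual transfer of media data can begin. As a minimal sketch (not the guide's verbatim sample), the audio pass might look like the following. It assumes the reader, writer, and the audio output/input set up above, plus a hypothetical serial dispatch queue named `rwAudioSerializationQueue`:

```objc
// A minimal sketch, assuming the reader/writer configured above and a
// hypothetical serial queue, rwAudioSerializationQueue. Pull decompressed
// audio sample buffers from the reader output and append them to the
// writer input until the source is exhausted.
[self.assetReader startReading];
[self.assetWriter startWriting];
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
[self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:rwAudioSerializationQueue usingBlock:^{
    while ([self.assetWriterAudioInput isReadyForMoreMediaData])
    {
        CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
        if (sampleBuffer != NULL)
        {
            [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
        }
        else
        {
            // No more samples to read: mark this input as finished.
            [self.assetWriterAudioInput markAsFinished];
            break;
        }
    }
}];
```

A real implementation would also track completion of every input and then call `finishWritingWithCompletionHandler:` on the writer, and would check `assetReader.status` for failures after reading ends.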
