
AVFoundation Programming Guide(官方文件翻譯)完整版中英對照

About AVFoundation - AVFoundation概述

AVFoundation is one of several frameworks that you can use to play and create time-based audiovisual media. It provides an Objective-C interface you use to work on a detailed level with time-based audiovisual data. For example, you can use it to examine, create, edit, or reencode media files. You can also get input streams from devices and manipulate video during real-time capture and playback. Figure I-1 shows the architecture on iOS.

AVFoundation 是幾個可用來播放和建立基於時間的視聽媒體的框架之一。它提供了一個 Objective-C 介面,讓你在細節層面上處理基於時間的視聽資料。例如,你可以用它來檢查、建立、編輯或重新編碼媒體檔案。你也可以從裝置獲取輸入流,並在實時捕捉和播放期間操控視訊。圖 I-1 顯示了 iOS 上的架構。


Figure I-1  AVFoundation stack on iOS

Figure I-2 shows the corresponding media architecture on OS X.

圖 I-2 顯示了 OS X 上相應的媒體架構:


Figure I-2  AVFoundation stack on OS X

You should typically use the highest-level abstraction available that allows you to perform the tasks you want.

  • If you simply want to play movies, use the AVKit framework.

  • On iOS, to record video when you need only minimal control over format, use the UIKit framework (UIImagePickerController).

Note, however, that some of the primitive data structures that you use in AV Foundation—including time-related data structures and opaque objects to carry and describe media data—are declared in the Core Media framework.

通常,您應該使用可用的最高級別的抽象介面,執行所需的任務。

  • 如果你只是想播放電影,使用 AVKit 框架。

  • 在 iOS 上,當你只需要對格式進行最少的控制時,使用 UIKit 框架(UIImagePickerController)錄製視訊。

但是請注意,你在 AV Foundation 中使用的某些原始資料結構,包括時間相關的資料結構,以及用於承載和描述媒體資料的不透明物件,都是在 Core Media 框架中宣告的。

At a Glance - 摘要

There are two facets to the AVFoundation framework—APIs related to video and APIs related just to audio. The older audio-related classes provide easy ways to deal with audio. They are described in the Multimedia Programming Guide, not in this document.

You can also configure the audio behavior of your application using AVAudioSession; this is described in Audio Session Programming Guide.

AVFoundation 框架包含視訊相關的 APIs 和音訊相關的 APIs。較舊的音訊相關類提供了簡便的方法來處理音訊。它們在 Multimedia Programming Guide 中介紹,不在這個文件中。

你也可以使用 AVAudioSession 來配置應用程式的音訊行為;這在 Audio Session Programming Guide 中有描述。
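As a brief illustration (a sketch, not part of the original guide), configuring the audio session for playback might look like this; AVAudioSessionCategoryPlayback is a standard category, and error handling here is reduced to logging:

NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Declare that this application plays audio, then activate the session.
if (![session setCategory:AVAudioSessionCategoryPlayback error:&sessionError]) {
    NSLog(@"Failed to set audio session category: %@", [sessionError localizedDescription]);
}
if (![session setActive:YES error:&sessionError]) {
    NSLog(@"Failed to activate audio session: %@", [sessionError localizedDescription]);
}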

Representing and Using Media with AVFoundation - 用AVFoundation 表示和使用媒體

The primary class that the AV Foundation framework uses to represent media is AVAsset. The design of the framework is largely guided by this representation. Understanding its structure will help you to understand how the framework works. An AVAsset instance is an aggregated representation of a collection of one or more pieces of media data (audio and video tracks). It provides information about the collection as a whole, such as its title, duration, natural presentation size, and so on. AVAsset is not tied to a particular data format. AVAsset is the superclass of other classes used to create asset instances from media at a URL (see Using Assets) and to create new compositions (see Editing).

AV Foundation 框架用來表示媒體的主要類是 AVAsset。框架的設計在很大程度上由這種表示引導,瞭解它的結構將有助於你理解該框架是如何工作的。一個 AVAsset 例項是一個或多個媒體資料片段(音訊和視訊軌道)集合的聚合表示。它提供關於整個集合的資訊,比如標題、時長、自然呈現大小等。AVAsset 不依賴於特定的資料格式。AVAsset 是其他類的父類:這些類用於從 URL 處的媒體建立資產例項(參見 Using Assets),以及建立新的組合(參見 Editing)。

Each of the individual pieces of media data in the asset is of a uniform type and called a track. In a typical simple case, one track represents the audio component, and another represents the video component; in a complex composition, however, there may be multiple overlapping tracks of audio and video. Assets may also have metadata.

Asset 中的每一個媒體資料片段都是統一的型別,稱為「軌道」。在典型的簡單情況下,一個軌道代表音訊元件,另一個代表視訊元件;然而在複雜的組合中,可能有多個重疊的音訊和視訊軌道。Assets 也可能有元資料。

A vital concept in AV Foundation is that initializing an asset or a track does not necessarily mean that it is ready for use. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you ask for values and get an answer back asynchronously through a callback that you define using a block.

AV Foundation 中一個非常重要的概念是:初始化一個 asset 或者一個軌道並不一定意味著它已經準備好可以使用。即使是計算一個專案的時長也可能需要一些時間(例如一個 MP3 檔案可能不包含摘要資訊)。與其在計算一個值時阻塞當前執行緒,AVFoundation 讓你請求這些值,並通過你用 block 定義的回撥非同步地得到答覆。

Playback - 播放

AVFoundation allows you to manage the playback of asset in sophisticated ways. To support this, it separates the presentation state of an asset from the asset itself. This allows you to, for example, play two different segments of the same asset at the same time rendered at different resolutions. The presentation state for an asset is managed by a player item object; the presentation state for each track within an asset is managed by a player item track object. Using the player item and player item tracks you can, for example, set the size at which the visual portion of the item is presented by the player, set the audio mix parameters and video composition settings to be applied during playback, or disable components of the asset during playback.

AVFoundation 允許你用精細的方式管理 asset 的播放。為了支援這一點,它將一個 asset 的呈現狀態從 asset 自身分離出來。例如,這允許你以不同的解析度同時播放同一個 asset 的兩個不同片段。一個 asset 的呈現狀態由 player item 物件管理;asset 中每個軌道的呈現狀態由 player item track 物件管理。例如,使用 player item 和 player item tracks,你可以設定播放器呈現專案可視部分的尺寸,設定播放期間要應用的音訊混合引數和視訊組合設定,或者在播放期間禁用 asset 的某些元件。

You play player items using a player object, and direct the output of a player to the Core Animation layer. You can use a player queue to schedule playback of a collection of player items in sequence.

你使用一個 player 物件來播放 player items,並將播放器的輸出定向到一個 Core Animation layer。你可以使用一個 player queue 按順序排程一組 player items 的播放。

Relevant Chapter: Playback

Reading, Writing, and Reencoding Assets - 讀取,寫入和重新編碼Assets

AVFoundation allows you to create new representations of an asset in several ways. You can simply reencode an existing asset, or—in iOS 4.1 and later—you can perform operations on the contents of an asset and save the result as a new asset.

AVFoundation 允許你用幾種方式建立一個 asset 的新表示。你可以簡單地將一個已有的 asset 重新編碼,或者在 iOS 4.1 及以後的版本中,你可以對一個 asset 的內容執行操作,並將結果儲存為一個新的 asset。

You use an export session to reencode an existing asset into a format defined by one of a small number of commonly-used presets. If you need more control over the transformation, in iOS 4.1 and later you can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you can, for example, choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.

你可以使用 export session 將一個現有的 asset 重新編碼為少數幾個常用預設(preset)之一所定義的格式。如果你需要對轉換有更多的控制,在 iOS 4.1 及以後的版本中,你可以串聯使用 asset reader 和 asset writer 物件,將一個 asset 從一種表示轉換為另一種。例如,你可以使用這些物件選擇想要在輸出檔案中表示的軌道,指定你自己的輸出格式,或者在轉換過程中修改這個 asset。

To produce a visual representation of the waveform, you use an asset reader to read the audio track of an asset.

為了產生波形的視覺化表示,你可以使用asset reader去讀取asset中的音訊軌道。
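A minimal sketch of that idea follows (not from the original guide): it assumes an asset whose tracks are already loaded, and reads decompressed linear PCM from the first audio track; a real waveform renderer would then inspect each sample buffer.

AVAsset *asset = <#An asset with an audio track#>;
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Ask for uncompressed linear PCM so the samples are easy to inspect.
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    // Examine the PCM data here to accumulate waveform amplitudes.
    CFRelease(sampleBuffer);
}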

Thumbnails - 縮圖

To create thumbnail images of video presentations, you initialize an instance of AVAssetImageGenerator using the asset from which you want to generate thumbnails. AVAssetImageGenerator uses the default enabled video tracks to generate images.

要建立視訊展示的縮圖,你使用想要為其生成縮圖的 asset 來初始化一個 AVAssetImageGenerator 例項。AVAssetImageGenerator 使用預設啟用的視訊軌道來生成影象。

Editing - 編輯

AVFoundation uses compositions to create new assets from existing pieces of media (typically, one or more video and audio tracks). You use a mutable composition to add and remove tracks, and adjust their temporal orderings. You can also set the relative volumes and ramping of audio tracks; and set the opacity, and opacity ramps, of video tracks. A composition is an assemblage of pieces of media held in memory. When you export a composition using an export session, it’s collapsed to a file.

AVFoundation 使用 compositions 從現有的媒體片段(通常是一個或多個視訊和音訊軌道)建立新的 assets。你使用一個可變的 composition 來新增和刪除軌道,並調整它們的時間順序。你也可以設定音訊軌道的相對音量和音量漸變,以及設定視訊軌道的不透明度和不透明度漸變。一個 composition 是儲存在記憶體中的媒體片段的集合。當你使用 export session 匯出一個 composition 時,它會被合併成一個檔案。
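As a rough sketch of that workflow (the source asset is a placeholder, and only a single video track is composed), appending an asset's first video track to a new mutable composition might look like this:

AVAsset *sourceAsset = <#An asset#>;
AVMutableComposition *composition = [AVMutableComposition composition];

// Add an empty video track to the composition.
AVMutableCompositionTrack *compositionVideoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

AVAssetTrack *sourceVideoTrack =
    [[sourceAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];

// Insert the entire source track at the start of the composition track.
NSError *error = nil;
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [sourceAsset duration])
                               ofTrack:sourceVideoTrack
                                atTime:kCMTimeZero
                                 error:&error];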

You can also create an asset from media such as sample buffers or still images using an asset writer.

你也可以使用 asset writer,從諸如取樣緩衝區(sample buffers)或靜態影象之類的媒體建立一個 asset。

Relevant Chapter: Editing

Still and Video Media Capture - 靜態和視訊媒體捕獲

Recording input from cameras and microphones is managed by a capture session. A capture session coordinates the flow of data from input devices to outputs such as a movie file. You can configure multiple inputs and outputs for a single session, even when the session is running. You send messages to the session to start and stop data flow.

從相機和麥克風記錄輸入是由一個 capture session 管理的。一個 capture session 協調從輸入裝置到輸出的資料流,比如一個電影檔案。你可以為一個單一的 session 配置多個輸入和輸出,甚至 session 正在執行的時候也可以。你將訊息傳送到 session 去啟動和停止資料流。

In addition, you can use an instance of a preview layer to show the user what a camera is recording.

此外,你可以使用 preview layer 的一個例項,向使用者顯示相機正在錄製的內容。
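A minimal sketch of that flow (assuming the app has camera permission; device discovery and error handling are reduced to the essentials):

AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Wire the default camera into the session as an input.
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [session canAddInput:input]) {
    [session addInput:input];
}

// Add a movie file output to receive the captured data.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput]) {
    [session addOutput:movieOutput];
}

// A preview layer shows the user what the camera is recording.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];

[session startRunning];  // Send a message to the session to start the data flow.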

Concurrent Programming with AVFoundation - AVFoundation併發程式設計

Callbacks from AVFoundation—invocations of blocks, key-value observers, and notification handlers—are not guaranteed to be made on any particular thread or queue. Instead, AVFoundation invokes these handlers on threads or queues on which it performs its internal tasks.

來自 AVFoundation 的回撥,比如 block 的呼叫、鍵值觀察者以及通知處理程式,都不保證在任何特定的執行緒或佇列上進行。相反,AVFoundation 在執行其內部任務的執行緒或佇列上呼叫這些處理程式。

There are two general guidelines as far as notifications and threading are concerned:

  • UI related notifications occur on the main thread.
  • Classes or methods that require you to create and/or specify a queue will return notifications on that queue.

Beyond those two guidelines (and there are exceptions, which are noted in the reference documentation) you should not assume that a notification will be returned on any specific thread.

下面是兩個有關通知和執行緒的一般準則:

  • 與使用者介面相關的通知發生在主執行緒上。
  • 需要你建立並且/或者指定一個佇列的類或方法,會在該佇列上返回通知。

除了這兩個準則(當然是有一些例外,在參考文件中會被指出),你不應該假設一個通知將在任何特定的執行緒返回。

If you’re writing a multithreaded application, you can use the NSThread method isMainThread or [[NSThread currentThread] isEqual:<#A stored thread reference#>] to test whether the invocation thread is a thread you expect to perform your work on. You can redirect messages to appropriate threads using methods such as performSelectorOnMainThread:withObject:waitUntilDone: and performSelector:onThread:withObject:waitUntilDone:modes:. You could also use dispatch_async to “bounce” to your blocks on an appropriate queue, either the main queue for UI tasks or a queue you have up for concurrent operations. For more about concurrent operations, see Concurrency Programming Guide; for more about blocks, see Blocks Programming Topics. The AVCam-iOS: Using AVFoundation to Capture Images and Movies sample code is considered the primary example for all AVFoundation functionality and can be consulted for examples of thread and queue usage with AVFoundation.
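For instance (a sketch, not part of the original guide; playerItem stands for an AVPlayerItem you are already observing), a notification handler that needs to update the UI can bounce to the main queue like this:

// Passing queue:nil means the block may run on whatever thread posts the
// notification, so hop to the main queue before touching UI state.
[[NSNotificationCenter defaultCenter] addObserverForName:AVPlayerItemDidPlayToEndTimeNotification
                                                  object:playerItem
                                                   queue:nil
                                              usingBlock:^(NSNotification *note) {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the user interface here.
    });
}];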

Prerequisites - 預備知識

AVFoundation is an advanced Cocoa framework. To use it effectively, you must have:

  • A solid understanding of fundamental Cocoa development tools and techniques
  • A basic grasp of blocks
  • A basic understanding of key-value coding and key-value observing

AVFoundation 是一種先進的 Cocoa 框架,為了有效的使用,你必須掌握下面的知識:

  • 對基本的 Cocoa 開發工具和技術有紮實的瞭解
  • 對 block 有基本的瞭解
  • 對鍵值編碼(key-value coding)和鍵值觀察(key-value observing)有基本的瞭解

See Also - 參考

There are several AVFoundation examples, including two that are key to understanding and implementing camera capture functionality:

  • AVCam-iOS: Using AVFoundation to Capture Images and Movies is the canonical sample code for implementing any program that uses the camera functionality. It is a complete sample, well documented, and covers the majority of the functionality showing the best practices.
  • AVCamManual: Extending AVCam to Use Manual Capture API is the companion application to AVCam. It implements Camera functionality using the manual camera controls. It is also a complete example, well documented, and should be considered the canonical example for creating camera applications that take advantage of manual controls.
  • RosyWriter is an example that demonstrates real time frame processing and in particular how to apply filters to video content. This is a very common developer requirement and this example covers that functionality.
  • AVLocationPlayer: Using AVFoundation Metadata Reading APIs demonstrates using the metadata APIs.

有幾個 AVFoundation 的例子,其中有兩個是理解和實現相機捕捉功能的關鍵:

  • AVCam-iOS: Using AVFoundation to Capture Images and Movies 是實現任何使用相機功能的程式的典型示例程式碼。它是一個完整的示例,文件齊全,涵蓋了大部分功能並展示了最佳實踐。
  • AVCamManual: Extending AVCam to Use Manual Capture API 是 AVCam 的配套應用程式。它使用手動相機控制實現相機功能。它也是一個完整的例子,文件齊全,應該被視為利用手動控制建立相機應用程式的典型例子。
  • RosyWriter 是一個演示實時幀處理的例子,特別是如何將濾鏡應用到視訊內容。這是一個非常普遍的開發人員需求,這個例子涵蓋了這個功能。
  • AVLocationPlayer: Using AVFoundation Metadata Reading APIs 演示瞭如何使用 metadata APIs。

Using Assets - 使用Assets

Assets can come from a file or from media in the user’s iPod library or Photo library. When you create an asset object all the information that you might want to retrieve for that item is not immediately available. Once you have a movie asset, you can extract still images from it, transcode it to another format, or trim the contents.

Assets 可以來自檔案,或者來自使用者的 iPod 庫或照片庫中的媒體。當你建立一個 asset 物件時,你可能想要為該專案檢索的所有資訊並不是立即可用的。一旦你有了一個電影 asset,你可以從中提取靜態影象,將它轉碼為另一種格式,或者對內容進行修剪。

Creating an Asset Object - 建立一個Asset物件

To create an asset to represent any resource that you can identify using a URL, you use AVURLAsset. The simplest case is creating an asset from a file:

要建立一個 asset 來表示任何你能用 URL 識別的資源,可以使用 AVURLAsset。最簡單的情況是從一個檔案建立 asset:

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];

Options for Initializing an Asset - 初始化一個 Asset 的選項

The AVURLAsset initialization methods take as their second argument an options dictionary. The only key used in the dictionary is AVURLAssetPreferPreciseDurationAndTimingKey. The corresponding value is a Boolean (contained in an NSValue object) that indicates whether the asset should be prepared to indicate a precise duration and provide precise random access by time.

AVURLAsset 的初始化方法接受一個選項字典作為第二個引數。字典中唯一被使用的 key 是 AVURLAssetPreferPreciseDurationAndTimingKey。相應的值是一個布林值(包含在一個 NSValue 物件中),它指出該 asset 是否應該準備好表示精確的時長,並提供按時間的精確隨機存取。

Getting the exact duration of an asset may require significant processing overhead. Using an approximate duration is typically a cheaper operation and sufficient for playback. Thus:

  • If you only intend to play the asset, either pass nil instead of a dictionary, or pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of NO (contained in an NSValue object).
  • If you want to add the asset to a composition (AVMutableComposition), you typically need precise random access. Pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of YES (contained in an NSValue object—recall that NSNumber inherits from NSValue):

獲得一個asset的確切持續時間可能需要大量的處理開銷。使用一個近似的持續時間通常是一個更便宜的操作並且對於播放已經足夠了。因此:

  • 如果你只打算播放這個 asset,要麼傳遞 nil 代替字典,要麼傳遞一個包含 AVURLAssetPreferPreciseDurationAndTimingKey 鍵和相應的 NO 值(包含在一個 NSValue 物件中)的字典。
  • 如果你想把 asset 新增到一個 composition(AVMutableComposition)中,通常你需要精確的隨機存取。傳遞一個包含 AVURLAssetPreferPreciseDurationAndTimingKey 鍵和相應的 YES 值(包含在一個 NSValue 物件中;回想一下,NSNumber 繼承自 NSValue)的字典:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES };
AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options];

Accessing the User’s Assets - 訪問使用者的Assets

To access the assets managed by the iPod library or by the Photos application, you need to get a URL of the asset you want.

  • To access the iPod Library, you create an MPMediaQuery instance to find the item you want, then get its URL using MPMediaItemPropertyAssetURL.For more about the Media Library, see Multimedia Programming Guide.
  • To access the assets managed by the Photos application, you use ALAssetsLibrary.

The following example shows how you can get an asset to represent the first video in the Saved Photos Album.

為了訪問由 iPod 庫或者照片應用程式管理的 assets,你需要得到你想要的 asset 的 URL。

  • 要訪問 iPod 庫,你建立一個 MPMediaQuery 例項來查詢你想要的專案,然後使用 MPMediaItemPropertyAssetURL 得到它的 URL。關於媒體庫的更多資訊,參見 Multimedia Programming Guide。
  • 要訪問由照片應用程式管理的 assets,你使用 ALAssetsLibrary。

下面的例子展示了如何獲得一個 asset 來表示「儲存的照片」相簿中的第一個視訊。

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {

// Within the group enumeration block, filter to enumerate just videos.
[group setAssetsFilter:[ALAssetsFilter allVideos]];

// For this example, we're only interested in the first item.
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
                        options:0
                     usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {

                         // The end of the enumeration is signaled by asset == nil.
                         if (alAsset) {
                             ALAssetRepresentation *representation = [alAsset defaultRepresentation];
                             NSURL *url = [representation url];
                             AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
                             // Do something interesting with the AV asset.
                         }
                     }];
                 }
                 failureBlock: ^(NSError *error) {
                     // Typically you should handle an error more gracefully than this.
                     NSLog(@"No groups");
                 }];

Preparing an Asset for Use - 將 Asset 準備好使用

Initializing an asset (or track) does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you should use the AVAsynchronousKeyValueLoading protocol to ask for values and get an answer back later through a completion handler you define using a block. (AVAsset and AVAssetTrack conform to the AVAsynchronousKeyValueLoading protocol.)

初始化一個 asset(或者軌道)並不一定意味著你可能想要為該專案檢索的所有資訊都是立即可用的。即使是計算一個專案的時長也可能需要一些時間(例如一個 MP3 檔案可能不包含摘要資訊)。與其在計算一個值時阻塞當前執行緒,你應該使用 AVAsynchronousKeyValueLoading 協議來請求值,並通過你用 block 定義的 completion handler 在稍後得到答覆。(AVAsset 和 AVAssetTrack 遵循 AVAsynchronousKeyValueLoading 協議。)

You test whether a value is loaded for a property using statusOfValueForKey:error:. When an asset is first loaded, the value of most or all of its properties is AVKeyValueStatusUnknown. To load a value for one or more properties, you invoke loadValuesAsynchronouslyForKeys:completionHandler:. In the completion handler, you take whatever action is appropriate depending on the property’s status. You should always be prepared for loading to not complete successfully, either because it failed for some reason such as a network-based URL being inaccessible, or because the load was canceled.

你使用 statusOfValueForKey:error: 來測試一個屬性的值是否已被載入。當一個 asset 第一次被載入時,它的大多數或所有屬性的值都是 AVKeyValueStatusUnknown。要為一個或多個屬性載入值,你呼叫 loadValuesAsynchronouslyForKeys:completionHandler:。在 completion handler 中,你根據屬性的狀態採取適當的操作。你應該始終為載入未能成功完成的情況做好準備,無論是因為某種原因(比如基於網路的 URL 不可訪問)而失敗,還是因為載入被取消。

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"duration"];

[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {

    NSError *error = nil;
    AVKeyValueStatus tracksStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
    switch (tracksStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:asset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancelation.
            break;
    }
}];

If you want to prepare an asset for playback, you should load its tracks property. For more about playing assets, see Playback.

如果你想準備一個 asset 去播放,你應該載入它的軌道屬性。更多有關播放 assets,請看 Playback

Getting Still Images From a Video - 從視訊中獲取靜態影象

To get still images such as thumbnails from an asset for playback, you use an AVAssetImageGenerator object. You initialize an image generator with your asset. Initialization may succeed, though, even if the asset possesses no visual tracks at the time of initialization, so if necessary you should test whether the asset has any tracks with the visual characteristic using tracksWithMediaCharacteristic:.

要從一個用於播放的 asset 中獲取縮圖之類的靜態影象,你使用 AVAssetImageGenerator 物件。用你的 asset 初始化一個影象生成器。不過,即使 asset 在初始化時沒有任何可視軌道,初始化也可能成功;所以如有必要,你應該使用 tracksWithMediaCharacteristic: 測試該 asset 是否有具備視覺特徵的軌道。

AVAsset *anAsset = <#Get an asset#>;
if ([[anAsset tracksWithMediaType:AVMediaTypeVideo] count] > 0) {
    AVAssetImageGenerator *imageGenerator =
        [AVAssetImageGenerator assetImageGeneratorWithAsset:anAsset];
    // Implementation continues...
}

You can configure several aspects of the image generator. For example, you can specify the maximum dimensions for the images it generates and the aperture mode using maximumSize and apertureMode, respectively. You can then generate a single image at a given time, or a series of images. You must ensure that you keep a strong reference to the image generator until it has generated all the images.

你可以配置影象生成器的幾個方面。例如,你可以分別使用 maximumSize 和 apertureMode,指定它生成的影象的最大尺寸和光圈模式。然後你可以在給定的時間生成單個影象,或者一系列影象。你必須確保在影象生成器生成所有影象之前,對它保持一個強引用。
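For example (a small sketch; the 512-by-512 limit is an arbitrary choice, and imageGenerator is assumed to be an AVAssetImageGenerator created as shown above):

imageGenerator.maximumSize = CGSizeMake(512.0, 512.0);  // Cap the pixel dimensions of generated images.
imageGenerator.apertureMode = AVAssetImageGeneratorApertureModeCleanAperture;  // Crop to the clean aperture.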

Generating a Single Image - 生成一個單獨的影象

You use copyCGImageAtTime:actualTime:error: to generate a single image at a specific time. AVFoundation may not be able to produce an image at exactly the time you request, so you can pass as the second argument a pointer to a CMTime that upon return contains the time at which the image was actually generated.

使用 copyCGImageAtTime:actualTime:error: 在指定時間生成單個影象。AVFoundation 可能無法在你請求的確切時間生成影象,所以你可以將一個指向 CMTime 的指標作為第二個引數傳入;返回時,它包含影象實際生成的時間。

AVAsset *myAsset = <#An asset#>;
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:myAsset];

Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime midpoint = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
NSError *error;
CMTime actualTime;

CGImageRef halfWayImage = [imageGenerator copyCGImageAtTime:midpoint actualTime:&actualTime error:&error];

if (halfWayImage != NULL) {

    NSString *actualTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
    NSString *requestedTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, midpoint));
    NSLog(@"Got halfWayImage: Asked for %@, got %@", requestedTimeString, actualTimeString);

    // Do something interesting with the image.
    CGImageRelease(halfWayImage);
}

Generating a Sequence of Images - 生成一系列影象

To generate a series of images, you send the image generator a generateCGImagesAsynchronouslyForTimes:completionHandler: message. The first argument is an array of NSValue objects, each containing a CMTime structure, specifying the asset times for which you want images to be generated. The second argument is a block that serves as a callback invoked for each image that is generated. The block arguments provide a result constant that tells you whether the image was created successfully or if the operation was canceled, and, as appropriate:

  • The image
  • The time for which you requested the image and the actual time for which the image was generated
  • An error object that describes the reason generation failed

In your implementation of the block, check the result constant to determine whether the image was created. In addition, ensure that you keep a strong reference to the image generator until it has finished creating the images.

要生成一系列影象,你給影象生成器傳送 generateCGImagesAsynchronouslyForTimes:completionHandler: 訊息。第一個引數是一個 NSValue 物件的陣列,每個物件都包含一個 CMTime 結構體,指定了你想要為其生成影象的 asset 時間。第二個引數是一個 block,作為為每個生成的影象而呼叫的回撥。block 的引數提供一個結果常量,告訴你影象是被成功建立了,還是操作被取消了,並視情況提供:

  • 影象
  • 你請求影象的時間和影象實際生成的時間
  • 一個描述生成失敗原因的 error 物件

在你的 block 實現中,檢查結果常量來確定影象是否被建立。此外,確保在影象生成器完成建立影象之前,對它保持一個強引用。

AVAsset *myAsset = <#An asset#>;
// Assume: @property (strong) AVAssetImageGenerator *imageGenerator;
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];

Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 600);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 600);
CMTime end = CMTimeMakeWithSeconds(durationSeconds, 600);
NSArray *times = @[[NSValue valueWithCMTime:kCMTimeZero],
                  [NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird],
                  [NSValue valueWithCMTime:end]];

[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
                completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                                    AVAssetImageGeneratorResult result, NSError *error) {

                NSString *requestedTimeString = (NSString *)
                    CFBridgingRelease(CMTimeCopyDescription(NULL, requestedTime));
                NSString *actualTimeString = (NSString *)
                    CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
                NSLog(@"Requested: %@; actual %@", requestedTimeString, actualTimeString);

                if (result == AVAssetImageGeneratorSucceeded) {
                    // Do something interesting with the image.
                }

                if (result == AVAssetImageGeneratorFailed) {
                    NSLog(@"Failed with error: %@", [error localizedDescription]);
                }
                if (result == AVAssetImageGeneratorCancelled) {
                    NSLog(@"Canceled");
                }
  }];

You can cancel the generation of the image sequence by sending the image generator a cancelAllCGImageGeneration message.

你可以通過給影象生成器傳送一個 cancelAllCGImageGeneration 訊息,來取消影象序列的生成。

Trimming and Transcoding a Movie - 修剪和轉碼電影

You can transcode a movie from one format to another, and trim a movie, using an AVAssetExportSession object. The workflow is shown in Figure 1-1. An export session is a controller object that manages asynchronous export of an asset. You initialize the session using the asset you want to export and the name of a export preset that indicates the export options you want to apply (see allExportPresets). You then configure the export session to specify the output URL and file type, and optionally other settings such as the metadata and whether the output should be optimized for network use.


你可以使用 AVAssetExportSession 物件將一部電影從一種格式轉碼為另一種格式,並對電影進行修剪。工作流程如圖1-1所示。一個 export session 是一個控制器物件,管理一個資產的非同步匯出。你使用想要匯出的資產和一個匯出預設(export preset)的名稱來初始化會話,這個預設指明你想應用的匯出選項(參見 allExportPresets)。然後你配置匯出會話,指定輸出 URL 和檔案型別,以及可選的其他設定,比如元資料,以及是否應該為網路使用而優化輸出。


Figure 1-1  The export session workflow

You can check whether you can export a given asset using a given preset using exportPresetsCompatibleWithAsset: as illustrated in this example:

你可以使用 exportPresetsCompatibleWithAsset: 檢查是否可以用給定的預設匯出給定的資產,如下例所示:

AVAsset *anAsset = <#Get an asset#>;
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) {
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
        initWithAsset:anAsset presetName:AVAssetExportPresetLowQuality];
    // Implementation continues.
}

You complete the configuration of the session by providing the output URL (the URL must be a file URL). AVAssetExportSession can infer the output file type from the URL’s path extension; typically, however, you set it directly using outputFileType. You can also specify additional properties such as the time range, a limit for the output file length, whether the exported file should be optimized for network use, and a video composition. The following example illustrates how to use the timeRange property to trim the movie:

你通過提供輸出 URL 來完成會話的配置(該 URL 必須是一個檔案 URL)。AVAssetExportSession 可以從 URL 的路徑副檔名推斷輸出檔案的型別;不過,通常你直接使用 outputFileType 來設定它。你還可以指定附加屬性,比如時間範圍、輸出檔案長度的限制、匯出的檔案是否應該為網路使用而優化,以及一個視訊組合(video composition)。下面的示例展示了如何使用 timeRange 屬性修剪電影:

    exportSession.outputURL = <#A file URL#>;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;

    CMTime start = CMTimeMakeWithSeconds(1.0, 600);
    CMTime duration = CMTimeMakeWithSeconds(3.0, 600);
    CMTimeRange range = CMTimeRangeMake(start, duration);
    exportSession.timeRange = range;

To create the new file, you invoke exportAsynchronouslyWithCompletionHandler:. The completion handler block is called when the export operation finishes; in your implementation of the handler, you should check the session’s status value to determine whether the export was successful, failed, or was canceled:

要建立新檔案,你呼叫 exportAsynchronouslyWithCompletionHandler:。匯出操作完成時,completion handler block 會被呼叫;在 handler 的實現中,你應該檢查會話的 status 值,以確定匯出是成功、失敗還是被取消:

    [exportSession exportAsynchronouslyWithCompletionHandler:^{

        switch ([exportSession status]) {
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
                break;
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"Export canceled");
                break;
            default:
                break;
        }
    }];

You can cancel the export by sending the session a cancelExport message.

The export will fail if you try to overwrite an existing file, or write a file outside of the application’s sandbox. It may also fail if:

  • There is an incoming phone call
  • Your application is in the background and another application starts playback

In these situations, you should typically inform the user that the export failed, then allow the user to restart the export.

你可以通過給會話傳送一個 cancelExport 訊息來取消匯出。

如果你嘗試覆蓋一個現有的檔案,或者在應用程式的沙盒之外寫檔案,匯出將會失敗。如果發生下面的情況,匯出也可能失敗:

  • 有一個來電
  • 你的應用程式在後臺並且另一個程式開始播放

在這種情況下,你通常應該通知使用者匯出失敗,然後允許使用者重新啟動匯出。

Playback - 播放

To control the playback of assets, you use an AVPlayer object. During playback, you can use an AVPlayerItem instance to manage the presentation state of an asset as a whole, and an AVPlayerItemTrack object to manage the presentation state of an individual track. To display video, you use an AVPlayerLayer object.

你使用 AVPlayer 物件來控制資產的播放。在播放期間,你可以使用一個 AVPlayerItem 例項來管理資產作為整體的顯示狀態,用 AVPlayerItemTrack 物件來管理單個軌道的顯示狀態。要顯示視訊,你使用 AVPlayerLayer 物件。

Playing Assets - 播放資產

A player is a controller object that you use to manage playback of an asset, for example starting and stopping playback, and seeking to a particular time. You use an instance of AVPlayer to play a single asset. You can use an AVQueuePlayer object to play a number of items in sequence (AVQueuePlayer is a subclass of AVPlayer). On OS X you have the option of the using the AVKit framework’s AVPlayerView class to play the content back within a view.

播放器是一個控制器物件,你用它來管理一個資產的播放,例如開始和停止播放,以及定位(seek)到特定的時間。你使用 AVPlayer 的例項來播放單個資產。你可以使用 AVQueuePlayer 物件按順序播放多個專案(AVQueuePlayer 是 AVPlayer 的子類)。在 OS X 上,你可以選擇使用 AVKit 框架的 AVPlayerView 類,在一個檢視中播放內容。
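As a brief sketch of the queue-based case (the two items are placeholders), you might build and start a queue player like this:

AVPlayerItem *firstItem = <#A player item#>;
AVPlayerItem *secondItem = <#Another player item#>;
// The queue player plays its items in order.
AVQueuePlayer *queuePlayer = [AVQueuePlayer queuePlayerWithItems:@[firstItem, secondItem]];
[queuePlayer play];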

A player provides you with information about the state of the playback so, if you need to, you can synchronize your user interface with the player’s state. You typically direct the output of a player to a specialized Core Animation layer (an instance of AVPlayerLayer or AVSynchronizedLayer). To learn more about layers, see Core Animation Programming Guide.

播放器為你提供有關播放狀態的資訊,因此如果需要,你可以使你的使用者介面與播放器的狀態同步。你通常將播放器的輸出定向到一個專門的 Core Animation layer(AVPlayerLayer 或 AVSynchronizedLayer 的例項)。要了解更多關於 layer 的資訊,參見 Core Animation Programming Guide。

Multiple player layers: You can create many AVPlayerLayer objects from a single AVPlayer instance, but only the most recently created such layer will display any video content onscreen.

多個播放器層:你可以從一個單獨的 AVPlayer 例項建立許多 AVPlayerLayer 物件,但是隻有最近建立的那一層才會在螢幕上顯示視訊內容。

Although ultimately you want to play an asset, you don’t provide assets directly to an AVPlayer object. Instead, you provide an instance of AVPlayerItem. A player item manages the presentation state of an asset with which it is associated. A player item contains player item tracks—instances of AVPlayerItemTrack—that correspond to the tracks in the asset. The relationship between the various objects is shown in Figure 2-1.

雖然你最終想要播放的是一個資產,但你並不直接把資產提供給 AVPlayer 物件。相反,你提供一個 AVPlayerItem 的例項。一個 player item 管理與它關聯的資產的顯示狀態。一個 player item 包含 player item tracks(AVPlayerItemTrack 的例項),對應資產中的軌道。各個物件之間的關係如圖2-1所示。


Figure 2-1  Playing an asset

This abstraction means that you can play a given asset using different players simultaneously, but rendered in different ways by each player. Figure 2-2 shows one possibility, with two different players playing the same asset, with different settings. Using the item tracks, you can, for example, disable a particular track during playback (for example, you might not want to play the sound component).

這種抽象意味著你可以同時使用不同的播放器播放同一個給定的資產,而每個播放器以不同的方式呈現它。圖2-2顯示了一種可能性:兩個不同的播放器播放同一個資產,並使用不同的設定。例如,使用專案軌道,你可以在播放期間禁用特定的軌道(例如,你可能不想播放聲音元件)。


Figure 2-2  Playing the same asset in different ways

You can initialize a player item with an existing asset, or you can initialize a player item directly from a URL so that you can play a resource at a particular location (AVPlayerItem will then create and configure an asset for the resource). As with AVAsset, though, simply initializing a player item doesn’t necessarily mean it’s ready for immediate playback. You can observe (using key-value observing) an item’s status property to determine if and when it’s ready to play.

你可以用一個現有的資產初始化一個 player item,或者直接從 URL 初始化一個 player item,以便播放特定位置的資源(AVPlayerItem 隨後會為該資源建立並配置一個資產)。不過,和 AVAsset 一樣,僅僅初始化一個 player item 並不一定意味著它已經可以立即播放。你可以(使用鍵值觀察)觀察專案的 status 屬性,以確定它是否以及何時準備好播放。

Handling Different Types of Asset - 處理不同型別的資產

The way you configure an asset for playback may depend on the sort of asset you want to play. Broadly speaking, there are two main types: file-based assets, to which you have random access (such as from a local file, the camera roll, or the Media Library), and stream-based assets (HTTP Live Streaming format).

你為播放配置一個資產的方式,可能取決於你想要播放的資產的種類。概括地說,主要有兩種型別:基於檔案的資產,你可以對其進行隨機訪問(例如來自本地檔案、相機膠捲或媒體庫),以及基於流的資產(HTTP Live Streaming 格式)。

To load and play a file-based asset, there are several steps:

  • Create an asset using AVURLAsset.
  • Create an instance of AVPlayerItem using the asset.
  • Associate the item with an instance of AVPlayer.
  • Wait until the item’s status property indicates that it’s ready to play (typically you use key-value observing to receive a notification when the status changes).

This approach is illustrated in Putting It All Together: Playing a Video File Using AVPlayerLayer.

載入並播放一個基於檔案的資產有以下幾個步驟:

  • 使用 AVURLAsset 建立一個資產。
  • 使用該資產建立一個 AVPlayerItem 的例項。
  • 將該專案與一個 AVPlayer 的例項關聯。
  • 等待專案的 status 屬性表明它已經準備好播放(通常在狀態改變時,使用鍵值觀察來接收通知)。

這個方法在 Putting It All Together: Playing a Video File Using AVPlayerLayer 中進行了演示。
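A condensed sketch of those steps (it assumes this code lives in an object that can act as a key-value observer, and omits observer removal for brevity):

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

// Observe the item's status; once it reaches AVPlayerItemStatusReadyToPlay
// (checked in observeValueForKeyPath:ofObject:change:context:), call [player play].
[playerItem addObserver:self forKeyPath:@"status" options:0 context:NULL];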