
Working With Video in iOS: AVFoundation and CoreMedia

Step 3: Conform to UIImagePickerControllerDelegate

If you tap the record button, you will see that the expected default recording behaviour is available to us; however, we haven’t specified how we want to save the video yet. The action that triggers our save is the user tapping ‘Use Video’ after the recording has been captured. This causes the picker to access its delegate and call the method imagePickerController(_:didFinishPickingMediaWithInfo:). So now let’s adopt and conform to UIImagePickerControllerDelegate.

First, notice that UIImagePickerControllerDelegate inherits from NSObjectProtocol, and so we need our VideoService to subclass NSObject (line 1). This means that our initializer is now an override and needs to be marked as such (line 4). The compiler also informs us that we need to adopt the UINavigationControllerDelegate protocol, which contains several optional methods that we won’t require in our demo.

Once this is done, we need to set our VideoService as the picker delegate. We can do this in the method setupVideoRecordingPicker() from the previous snippet and set picker.delegate = self.

To check that this works, we can add a print statement to the delegate method (line 11), and build and run our app once again.
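Since the original snippets aren’t reproduced here, a minimal sketch of the VideoService described in this step might look like the following. The picker configuration inside setupVideoRecordingPicker() is an assumption based on the surrounding description:

```swift
import UIKit
import MobileCoreServices

class VideoService: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    static let instance = VideoService()

    override init() {
        super.init()
    }

    // Assumed shape of the picker setup from the previous snippet.
    func setupVideoRecordingPicker() -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.mediaTypes = [kUTTypeMovie as String]
        picker.delegate = self
        return picker
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        print("didFinishPickingMediaWithInfo called") // line 11 in the article's snippet
    }
}
```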

Step 4: Save Video to Photos Album

Once we have confirmed that our delegate method is being called successfully, we can go about saving the captured video to the Photos application.

Let’s start by getting the video url in the didFinishPickingMediaWithInfo delegate method (lines 18–21). When the picker selects some media (image or video) the method is called, and an info dictionary is passed in. You can access all kinds of useful information about your selected media by subscripting with one of the InfoKey values, a key type nested in UIImagePickerController. In our example we want to access the URL of the video we captured, so we subscript using the appropriate key and cast the value (which is stored as Any) to our desired URL type (line 19).

Once we have this url, we can create a new private method saveVideo(at mediaUrl:) (lines 3–9) and pass in the url. Again we need to seek permission from the user to store media in their library, so we return to our info.plist and create a new key Photo Library Additions Usage Description with an accompanying string which will be presented to the user. With that done, saving is made incredibly simple with two global functions from UIKit: the first returns a boolean indicating whether the video is compatible with saving to a photo album, and the second performs the save and calls a #selector method to report whether the save was successful. This #selector method has been created and will be used in the next step (lines 11–13).
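A sketch of that save method, using UIKit’s UIVideoAtPathIsCompatibleWithSavedPhotosAlbum and UISaveVideoAtPathToSavedPhotosAlbum functions:

```swift
private func saveVideo(at mediaUrl: URL) {
    let path = mediaUrl.path
    // Check the video can be written to the Saved Photos album before attempting the save.
    if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
        UISaveVideoAtPathToSavedPhotosAlbum(path,
                                            self,
                                            #selector(video(_:didFinishSavingWithError:contextInfo:)),
                                            nil)
    }
}

// UIKit calls this once the save completes; it is fleshed out in the next step.
@objc private func video(_ videoPath: String,
                         didFinishSavingWithError error: Error?,
                         contextInfo: UnsafeRawPointer) {
}
```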

Step 5: Pass Video URL with Delegate Pattern

Once we have saved the video we want to be able to pass its url to our view controller. To do this, we will use the delegate pattern. First, we create a protocol VideoServiceDelegate with a method videoDidFinishSaving(error: url:) (lines 1–3). This method passes an optional error if the save was unsuccessful, and an optional url if it succeeded. We create a delegate property of type VideoServiceDelegate? and use that property to pass the error and url in the #selector method we created in the previous snippet (lines 16–19).
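A sketch of that protocol and the completion selector, filling in the stub from the previous step:

```swift
protocol VideoServiceDelegate: AnyObject {
    func videoDidFinishSaving(error: Error?, url: URL?)
}

// Inside VideoService:
weak var delegate: VideoServiceDelegate?

@objc private func video(_ videoPath: String,
                         didFinishSavingWithError error: Error?,
                         contextInfo: UnsafeRawPointer) {
    // Pass a url on success, or the error on failure, to whoever is listening.
    let url = error == nil ? URL(fileURLWithPath: videoPath) : nil
    delegate?.videoDidFinishSaving(error: error, url: url)
}
```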

Now in the view controller, we can adopt VideoServiceDelegate, implement the protocol method and set VideoService’s delegate property using VideoService.instance.delegate = self. In the snippet below, we are confirming that the save was successful and showing an alert informing the user. If we build and run, we should find that our save is successful! Now we can also open our Photos application and see the newly created movie in ‘All Photos’.
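The view controller side might look something like this (the alert copy is an assumption):

```swift
extension ViewController: VideoServiceDelegate {
    func videoDidFinishSaving(error: Error?, url: URL?) {
        // Confirm the save succeeded and let the user know either way.
        let title = error == nil ? "Success!" : "Error"
        let message = error == nil ? "Your video was saved to Photos." : error?.localizedDescription
        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated: true, completion: nil)
    }
}
```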

Playing Back Video with VideoPlayerView

In the past, video playback was achieved with an MPMoviePlayerController, which provided some handy default functionality in a few lines of setup code. It was deprecated in iOS 9 and replaced by AVFoundation’s AVPlayer, which requires more setup but allows for greater customization of your player.

In the next section, we will create a custom VideoPlayerView, which is a subclass of UIView and contains all of the UI components for interacting with the video. In the screenshot below you can see that there are two labels and a slider at the bottom of the view to show the current position and overall duration of the video. This article will not address how these components are created and added to their superview, but will instead focus on setting up the AVPlayer and handling the tracking of video progress using AVFoundation and CoreMedia.

Step 1: Create AVPlayer and AVPlayerLayer

In order to show a video, we first need to import AVFoundation and then set up two properties: an AVPlayer and an AVPlayerLayer. Notice that we want to initialize our custom class with a frame and videoURLString (line 6), which we pass into a method setupVideoPlayerWith(path:). Here we create the AVPlayer, the playback engine which handles playing, pausing, and tracking progress. This can be done in a single line with the video url (line 19). Once we have our player set up, we need to create the player layer. All subclasses of UIView have layers, and typically we interact with them to create borders, corner radii, and even layer animations. In this case the layer acts as our player screen. We need to initialize our layer with the player (line 20), then set the frame of our layer and add it as a sublayer of our class (lines 22–23). With all of this setup done, we can begin playback with the play() method (line 14).
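Pulled together, a minimal sketch of the class described above might look like this:

```swift
import AVFoundation
import UIKit

class VideoPlayerView: UIView {
    private var player: AVPlayer?
    private var playerLayer: AVPlayerLayer?

    init(frame: CGRect, videoURLString: String) {
        super.init(frame: frame)
        setupVideoPlayerWith(path: videoURLString)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupVideoPlayerWith(path: String) {
        guard let url = URL(string: path) else { return }
        // The AVPlayer is the playback engine; the AVPlayerLayer is its on-screen surface.
        player = AVPlayer(url: url)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.frame = bounds
        if let playerLayer = playerLayer {
            layer.addSublayer(playerLayer)
        }
        player?.play()
    }
}
```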

Now, we can test it out by taking the url from our VideoServiceDelegate method and passing it into a method playMovie(with url:) (lines 15–19) that initializes the VideoPlayerView with the video url and adds it as a subview of our ViewController view. When we build and run, we should see that the video starts playing on its own.
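On the view controller side, a minimal version of that playback method could be:

```swift
func playMovie(with url: URL) {
    // Create the player view over the whole screen and add it to the hierarchy.
    let videoPlayerView = VideoPlayerView(frame: view.frame, videoURLString: url.absoluteString)
    view.addSubview(videoPlayerView)
}
```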

Step 2: Tap Gesture Controls Play and Pause

Let’s now control play and pause by adding a tap gesture to our VideoPlayerView. We initialize the tap gesture and add it to our custom class in one line (line 9) using handleTapGesture(sender:) as our #selector method. This method checks a boolean isSettingPlay (line 3) to determine whether playback should be started or stopped, and then toggles the value of that same property (line 18). Build and run again to check that the behaviour is as we would expect.
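A sketch of the gesture handling, assuming isSettingPlay is true while the video is playing:

```swift
// Inside VideoPlayerView:
private var isSettingPlay = true  // true while the video is playing

private func setupTapGesture() {
    addGestureRecognizer(UITapGestureRecognizer(target: self,
                                                action: #selector(handleTapGesture(sender:))))
}

@objc private func handleTapGesture(sender: UITapGestureRecognizer) {
    // Pause if currently playing, otherwise resume, then flip the flag.
    isSettingPlay ? player?.pause() : player?.play()
    isSettingPlay.toggle()
}
```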

Step 3: Track Video Progress with Labels and Slider

So far we have been able to successfully play and pause our video, but our time indication labels and slider haven’t changed at all. In our VideoPlayerView we create a new method trackVideoProgress() in order to update our current time label and slider (lines 25–33). We begin by creating an instance of CMTime (line 26). CM is the prefix for CoreMedia, the framework responsible for handling sample processing, synchronization, queues, and time representation in AVFoundation. CMTime is initialized with a value and timescale, and can be converted into many different time measurement units; seconds is always value / timescale. With this in mind, we can set our desired time interval of 0.5 seconds. Now we observe our AVPlayer instance and receive a CMTime object with the current time at the frequency of our interval using addPeriodicTimeObserver(forInterval:queue:using:). We update the label and the slider within the observer’s closure.
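A sketch of that method; currentTimeLabel, the String(time:) extension, and setSliderValue(for:progress:) are names taken from this and the following descriptions:

```swift
// Inside VideoPlayerView:
private var timeObserverToken: Any?

private func trackVideoProgress() {
    // seconds = value / timescale, so a value of 1 and timescale of 2 gives our 0.5 s interval.
    let interval = CMTime(value: 1, timescale: 2)
    // Retain the returned token in case the observer needs removing later.
    timeObserverToken = player?.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak self] currentTime in
        guard let self = self, let player = self.player else { return }
        self.currentTimeLabel.text = String(time: currentTime) // extension defined later in this step
        self.setSliderValue(for: player, progress: currentTime)
    }
}
```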

In order to get the total length of our video, we can override the NSObject instance method observeValue(forKeyPath:of:change:context:) (lines 25–41). Once we have registered as a key-value observer of the player’s loadedTimeRanges, the AVPlayer calls this method when it has loaded the video from the URL and before it begins playback. In this method we can guard that the loadedTimeRanges are available (line 37). Provided they are, we will be able to access the duration of our video (line 39).
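A sketch of the registration and the override; the key path string and durationLabel are assumptions based on the description:

```swift
// When setting up the player, register for the loadedTimeRanges key path:
player?.addObserver(self, forKeyPath: "currentItem.loadedTimeRanges", options: .new, context: nil)

override func observeValue(forKeyPath keyPath: String?,
                           of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?,
                           context: UnsafeMutableRawPointer?) {
    // Guard that the ranges have loaded, then read the item's duration.
    guard keyPath == "currentItem.loadedTimeRanges",
          let duration = player?.currentItem?.duration else { return }
    durationLabel.text = String(time: duration)
}
```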

To properly display the time in our labels, we can create an extension on the String type that accepts a CMTime argument and converts it to a String showing minutes and seconds in our desired format: 00:00 (lines 1–10). We can convert the CMTime type to a Float64 representation of seconds using CMTimeGetSeconds(_ time:). Then it is a simple matter of getting the number of minutes by dividing by 60 and the remaining seconds by using the modulo operator % (lines 5–6). This method is called to update the label text in both of the previous methods (lines 28 and 40).
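One possible shape for that extension, guarded against indefinite times:

```swift
import CoreMedia

extension String {
    // Hypothetical initializer formatting a CMTime as "mm:ss".
    init(time: CMTime) {
        let rawSeconds = CMTimeGetSeconds(time)
        guard rawSeconds.isFinite else {
            self = "00:00"
            return
        }
        let totalSeconds = Int(rawSeconds)
        // Minutes by integer division, remaining seconds via the modulo operator.
        self = String(format: "%02d:%02d", totalSeconds / 60, totalSeconds % 60)
    }
}
```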

Finally, we can update the slider in our trackVideoProgress method (line 30) using setSliderValue(for:progress:) (lines 14–19). In it, we access the duration of the video from the AVPlayer and the current progress, and then divide the progress by the duration. The slider needs to be set up with minimum and maximum values of 0.0 and 1.0. Once again we can build and run.
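A sketch of that helper:

```swift
private func setSliderValue(for player: AVPlayer, progress: CMTime) {
    guard let duration = player.currentItem?.duration else { return }
    let durationSeconds = CMTimeGetSeconds(duration)
    let progressSeconds = CMTimeGetSeconds(progress)
    guard durationSeconds.isFinite, durationSeconds > 0 else { return }
    // The slider's range is 0.0–1.0, so progress / duration maps straight onto it.
    slider.value = Float(progressSeconds / durationSeconds)
}
```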

Step 4: Scrub to Different Points in Video Using Slider

We have the playback of the video driving the movement of the slider, but we can also do the opposite and have the user move the slider to determine the progress of the video. In order to do this we need to add a #selector method to our slider with slider.addTarget(self, action: #selector(handleSliderChangedValue(sender:)), for: .valueChanged). In this method we get the duration (CMTime) of our video from the player (line 4) and convert it to seconds with CMTimeGetSeconds (which you will remember returns a Float64). We then calculate the seek position in seconds by multiplying the slider value by the total seconds, convert that back into a CMTime object using a timescale of 1 for seconds, and then pass that into the AVPlayer method seek(to:completionHandler:).
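Putting that together, the handler might look like:

```swift
@objc private func handleSliderChangedValue(sender: UISlider) {
    guard let duration = player?.currentItem?.duration else { return }
    let totalSeconds = CMTimeGetSeconds(duration)
    guard totalSeconds.isFinite else { return }
    // Map the slider's 0.0–1.0 value onto the video's timeline in seconds.
    let seekSeconds = Float64(sender.value) * totalSeconds
    let seekTime = CMTime(value: CMTimeValue(seekSeconds), timescale: 1)
    player?.seek(to: seekTime, completionHandler: { _ in })
}
```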

Wrap Up

And that’s it! We have managed to pack a whole lot of functionality into our two custom classes and provided the user with a range of typical features for working with media. An important thing to remember is that many of the concepts are transferable. For example, UIImagePickerController is able to work with images, and AVFoundation is a powerful framework for working with audio; with little adjustment you could create a totally different user experience with a totally different media type. Please share this article and build something cool. Thanks for reading!