Generative Adversarial Networks and the Rise of Fake Faces: an Intellectual Property Perspective

The tremendous growth in the artificial intelligence (AI) sector over the last several years may be attributed in large part to the proliferation of so-called big data.  But even today, data sets of sufficient size and quality are not always available for certain applications.  That’s where a technology called generative adversarial networks (GANs) comes in.  GANs are neural networks comprising two separate networks, a generator and a discriminator, that face off against each other, and they are useful for creating new (“synthetic” or “fake”) data samples.  As a result, one of the hottest areas for AI research today involves GANs, their ever-growing use cases, and the tools to identify their fake samples in the wild.  Face image-generating GANs, in particular, have received much of the attention due to their ability to generate highly realistic faces.
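For readers unfamiliar with the mechanics, the sketch below illustrates the two-network structure described above: a generator that turns random vectors into fake images and a discriminator that tries to tell real images from fakes.  It is a minimal illustration assuming PyTorch; the fully connected layers, the 64x64 image size, and the single training step are simplifying assumptions, not the convolutional architectures used by state-of-the-art face generators.

```python
# Minimal sketch of a GAN's two adversarial networks (assumes PyTorch).
# Layer sizes, the 64x64 RGB image shape, and the loss setup are
# illustrative assumptions, not a production face-generation model.
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random "seed" vector fed to the generator
IMG_PIXELS = 64 * 64 * 3  # a small 64x64 RGB image, flattened

class Generator(nn.Module):
    """Maps a random latent vector to a synthetic ("fake") image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Tanh(),  # pixel values in [-1, 1]
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores an image: closer to 1 means 'real', closer to 0 means 'fake'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, real_imgs, g_opt, d_opt, loss=nn.BCELoss()):
    """One adversarial step: the discriminator learns to separate real
    training images from generated ones, while the generator learns to fool it."""
    batch = real_imgs.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update: reward correct real/fake classification.
    z = torch.randn(batch, LATENT_DIM)
    fake_imgs = gen(z).detach()
    d_loss = loss(disc(real_imgs), real_labels) + loss(disc(fake_imgs), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: try to make the discriminator call fakes "real".
    z = torch.randn(batch, LATENT_DIM)
    g_loss = loss(disc(gen(z)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Hypothetical setup (illustrative only):
# gen, disc = Generator(), Discriminator()
# g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
# d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
```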

One of the notable features of face image-generating GANs is their ability to generate synthetic faces having particular attributes, such as desired eye and hair color, skin tone, gender, and a certain degree of “attractiveness,” among others, that by appearance are nearly indistinguishable from reality.  These fake designer face images can be combined (by blending their latent feature vectors, as sketched below) to produce even more highly sculpted face images having custom genetic features.  A similar process using celebrity images can be used to generate fake images well-suited to targeted online or print advertisements and other purposes.  Imagine the face of someone selling you a product or service whose persona is customized to match your particular likes and dislikes (after all, market researchers know all about you) and bears a vague resemblance to a favorite athlete, historical figure, or celebrity.  Even though endorsements from family, friends, and celebrities are seen as the most effective route to high marketing conversion rates, a highly tailored GAN-generated face may one day rival those techniques.
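Mechanically, “combining” faces in this way typically means mixing the latent feature vectors that a trained generator decodes into images.  The snippet below is a hypothetical sketch building on the generator sketched earlier; the function name blend_faces, the gen object, the mixing weight, and the latent vectors z_a and z_b are illustrative assumptions rather than any particular product’s API.

```python
# Hypothetical sketch: blend two latent vectors and decode the mixture
# with a trained generator (e.g., the Generator class sketched above).
import torch

def blend_faces(gen, z_a, z_b, alpha=0.5):
    """Linearly interpolate between two latent vectors and generate the result.
    alpha=0.0 reproduces face A, alpha=1.0 reproduces face B, and values in
    between yield a face sharing attributes of both."""
    z_mix = (1 - alpha) * z_a + alpha * z_b
    with torch.no_grad():  # inference only, no training
        return gen(z_mix)

# Hypothetical usage: z_celebrity and z_athlete would be latent vectors whose
# decoded images resemble the two public figures in the hypothetical below.
# z_celebrity = torch.randn(1, LATENT_DIM)
# z_athlete = torch.randn(1, LATENT_DIM)
# blended_image = blend_faces(gen, z_celebrity, z_athlete, alpha=0.5)
```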

As previously discussed on this website, AI technologies involving any use of human face data, such as face detection, facial recognition, face swapping, deep fakes, and now synthetic face generation, raise a number of legal (and ethical) issues.  Facial recognition (a type of regulated biometric information in some states), for example, has become a lightning rod for privacy-related laws and lawsuits.  Proponents of face image-generating GANs seem to recognize the potential legal risk posed by their technology when they argue that generating synthetic faces avoids copyright restrictions (an argument that at least implicitly acknowledges that data sets found online may contain copyrighted images scraped from the Internet).  But the copyright issues may not be so clear-cut in the case of GANs.  And even if copyright claims are avoided, a GAN developer may face other potential legal issues, such as those involving publicity and privacy rights.

Consider the following hypothetical: GAN Developer’s face image-generating model is used to create a synthetic persona combining features from at least two well-known public figures, Celebrity and Athlete, who own their respective publicity rights, i.e., the right to control the use of their names and likenesses, which they manage through their publicity, management, legal, and/or agency teams.  Advert Co. acquires the synthetic face image sample and uses it in a national print advertising campaign that appears in leading fitness, adventure, and style magazines.  All of the real celebrity, athlete, and other images used in GAN Developer’s discriminator network are the property of Image Co.  GAN Developer did not obtain permission to use Image Co.’s images, but it also did not retain the images after its model was fully developed and used to create the synthetic face image sample.

Image Co., which asserts that it owns the exclusive right to copy, reproduce, and distribute the original real images and to make derivatives thereof, sues GAN Developer and Advert Co. for copyright infringement.

As a possible defense, GAN Developer might argue that its temporary use of the original copyrighted images, which were not retained after their use, was a “fair use,” and both GAN Developer and Advert Co. might further argue that the synthetic face image is an entirely new work, that it is a transformative use of the original images, and that it is not a derivative of the originals.

With regard to their fair use argument, the Copyright Act provides a non-exhaustive list of factors to consider in deciding whether the use of a copyrighted work was an excusable fair use: “(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.”  17 USC § 107.  Some of the many thoroughly reasoned and well-cited court opinions concerning the fair use doctrine address its applicability to face images.  In just one example, a court granted summary judgment in favor of a defendant after finding that the defendant’s extraction of outline features of a face from a copyrighted online photo of a mayor, for use in opposition political ads, was an excusable fair use.  Kienitz v. Sconnie Nation LLC, 766 F. 3d 756 (7th Cir. 2014).  Even so, no court has considered the specific fact pattern set forth in the above hypothetical involving GANs, so it remains to be seen how a court might apply the fair use doctrine in such circumstances.

As for the other defenses, a derivative work is a work based on or derived from one or more already existing works.  Copyright Office Circular 14 at 1 (2013).  A derivative work incorporates some or all of a preexisting work and adds new original copyrightable authorship to that work.  A derivative work is one that generally involves transformation of the content of the preexisting work into an altered form, such as the translation of a novel into another language, the adaptation of a novel into a movie or play, the recasting of a novel as an e-book or an audiobook, or a t-shirt version of a print image.  See Authors Guild v. Google, Inc., 804 F. 3d 202, 215 (2nd Cir. 2015).  In the present hypothetical, a court might consider whether GAN Developer’s synthetic image sample is an altered form of Image Co.’s original Celebrity and Athlete images.

With regard to the transformative use test, something is sufficiently transformative if it “adds something new, with a further purpose or different character, altering the first with new expression, meaning or message….” Campbell v. Acuff-Rose Music, Inc., 510 US 569, 579 (1994) (citing Leval, 103 Harv. L. Rev. at 1111). “[T]he more transformative the new work,” the more likely it may be viewed as a fair use of the original work. See id.  Thus, a court might consider whether GAN Developer’s synthetic image “is one that serves a new and different function from the original work and is not a substitute for it.”  Authors Guild, Inc. v. HathiTrust, 755 F. 3d 87, 96 (2nd Cir. 2014).  Depending on the “closeness” of the synthetic face to Celebrity’s and Athlete’s, whose features were used to design the synthetic face, a court might find that the new face is not a substitute for the originals, at least from a commercial perspective, and therefore it is sufficiently transformative.  Again, no court has considered the hypothetical GAN fact pattern, so it remains to be seen how a court might apply the transformative use test in such circumstances.

Even if GAN Developer and Advert Co. successfully navigate around the copyright infringement issues, they may not be entirely out of the liability woods.  Getting back to the hypothetical, they may still face misappropriation-of-publicity-rights claims from one or both of Celebrity and Athlete.  Publicity rights often arise in connection with the use of a person’s name or likeness for advertising purposes.  New York courts, which have a long history of dealing with publicity rights issues, have found that “a name, portrait, or picture is used ‘for advertising purposes’ if it appears in a publication which, taken in its entirety, was distributed for use in, or as part of, an advertisement or solicitation for patronage of a particular product or service.” See Scott v. WorldStarHipHop, Inc., No. 10-cv-9538 (S.D.N.Y. 2012) (citing cases).

Right of publicity laws in some states cover not only a person’s persona, but extend to the unauthorized use and exploitation of that person’s voice, sound-alike voice, signature, nicknames, first name, roles or characterizations performed by that person (i.e., celebrity roles), personal catchphrases, identity, and objects closely related to or associated with the persona (i.e., celebrities associated with particular goods).  See Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988) (finding advertiser liable for using sound-alike performers to approximate the vocal sound of actor Bette Midler); Waits v. Frito-Lay, Inc., 978 F.2d 1093 (9th Cir. 1992) (similar facts); Onassis v. Christian Dior, 122 Misc. 2d 603 (NY Supreme Ct. 1984) (finding advertiser liable for impermissibly misappropriating Jacqueline Kennedy Onassis’ identity for the purposes of trade and advertising where the picture used to establish that identity was that of look-alike model Barbara Reynolds); White v. Samsung Electronics Am., Inc., 971 F.2d 1395 (9th Cir. 1992) (finding liability where defendant employed a robot that looked like and replicated the actions of Vanna White of “Wheel of Fortune” fame); Carson v. Here’s Johnny Portable Toilets, 698 F.2d 831 (6th Cir. 1983) (finding defendant liable where its advertisement associated its products with the well-known “Here’s Johnny” introduction of television personality Johnny Carson); Motschenbacher v. R.J. Reynolds Tobacco Co., 498 F.2d 921 (9th Cir. 1974) (finding defendant liable where its advertisement used a distinctive phrase and race car, and where the public could unequivocally relate the phrase and the car to the famous individual associated with the race car).  Some courts, however, have drawn the line at fictional names, even when the fictional name is closely related to a real one.  See Duncan v. Universal Music Group et al., No. 11-cv-5654 (E.D.N.Y. 2012).

Thus, Advert Co. might argue that it did not misappropriate Celebrity’s and Athlete’s publicity rights for its own advantage because neither of their likenesses is generally apparent in the synthetic image.  Celebrity or Athlete might counter with evidence demonstrating that the image contains sufficient genetic features, such as eye shape, to make an observer think of them.  As some of the cases above suggest, a direct use of a name or likeness is not necessary for a finding of misappropriation of another’s persona.  On the other hand, the burden of proof increases when identity is based on indirect means, such as voice, association with objects, or, in the case of a synthetic face, a mere resemblance.

A court might also hear additional arguments against misappropriation. Similar to the transformative use test under a fair use inquiry, Advert Co. might argue that its synthetic image adds significant creative elements such that the original images were transformed into something more than a mere likeness or imitation, or that its use of others’ likenesses was merely incidental (5 J. Thomas McCarthy, McCarthy on Trademarks and Unfair Competition § 28:7.50 (4th ed. 2014) (“The mere trivial or fleeting use of a person’s name or image in an advertisement will not trigger liability when such a usage will have only a de minimis commercial implication.”)). Other arguments that might be raised include the First Amendment and perhaps a novel argument that output from a GAN model cannot constitute misappropriation because, at its core, the model simply learns for itself what features of an image’s pixel values are most useful for characterizing images of human faces, and thus neither the model nor GAN Developer had awareness of a real person’s physical features when generating a fake face.  But see In Re Facebook Biometric Information Privacy Litigation, slip op. (Dkt. 302), No. 3:15-cv-03747-JD (N.D. Cal. May 14, 2018) (finding unpersuasive a “learning” by artificial intelligence argument in the context of facial recognition) (more on this case here).

This post barely scratches the surface of some of the legal issues and types of evidence that might arise in a situation like the above GAN hypothetical.  One can imagine all sorts of other possible scenarios involving synthetic face images and the potential legal risks that GAN developers and others might confront.

For more information about one online image data set, visit ImageNet; for an overview of GANs, see these slides (by GANs innovator Ian Goodfellow and others), this tutorial video (at 51:00 mark), and this ICLR 2018 conference paper by NVIDIA.