The Generative AI Revolution: Key Legal Considerations for the Media & Entertainment Industry
For better or worse, generative artificial intelligence (GenAI) is already transforming the way we live and work. Within two months of its initial release to the public, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history. Other popular generative AI tools such as GitHub Copilot, DALL-E, HarmonAI, and Runway can generate computer code, images, songs, and videos, respectively, with limited human involvement. The implications are immense and have already sparked calls for new federal regulatory agencies, a pause on AI development, and even concerns about human extinction.
Few industries have been affected by GenAI as profoundly and rapidly as the Media & Entertainment industry. Already, GenAI has emerged as a major flashpoint in Hollywood labor negotiations; the Recording Academy has had to wrestle with the eligibility of AI-generated songs for Grammy Awards; media organizations are examining whether journalistic standards can accommodate AI-generated news; and a growing list of artists and creators are stepping up to challenge key assumptions in the AI developer community. How courts, lawmakers, regulators, and industry members respond to these challenges could shape the industry for decades to come.
Below, we outline key legal issues industry members should keep in mind.
1. Hollywood Labor Strife
When Hollywood writers went on strike earlier this year, one issue in particular generated headlines: the union's demand to regulate the use of generative artificial intelligence on union projects, including prohibiting GenAI from writing or rewriting literary material, prohibiting its use as source material, and prohibiting the use of union-covered content to train AI large language models. The Screen Actors Guild followed suit shortly thereafter, with SAG-AFTRA President Fran Drescher arguing that AI poses a threat to creative professions and that actors and performers need contractual protections against exploitation.
For its part, the Directors Guild of America (DGA) recently ratified an agreement with the studios that includes new guidance on the use of GenAI. Among other things, the language states that an “Employer’s decision to utilize [GenAI] in connection with creative elements will be subject to consultation between the Employer and the employee.” It also affirms that “duties performed by DGA members must be assigned to a person and [GenAI] does not constitute a person.” This provision aims to ensure that the responsibilities and creative tasks undertaken by DGA members cannot be outsourced to AI tools or systems, thus safeguarding their roles and contributions in the industry.
Despite criticism from some in the labor movement that the DGA language is insufficient to adequately protect workers’ rights, it may well become a model for future labor negotiations. The language could also be critical for studios and labels, allowing them to utilize these revolutionary new technologies to, for example, create new content, improve post-production, or even enable a film or series to proceed after major production disruptions.
2. Use of GenAI in the Creative Process
The US Copyright Office and courts have made it clear that content produced solely by GenAI is not protectable by federal copyright laws because of the absence of human authorship. But what happens when GenAI is merely one part of the creative process? The Copyright Office has so far opened the door to copyright protection for works created by GenAI but modified by humans. It has been less receptive to arguments that human creativity is involved in even the most substantial GenAI prompt-writing.
For example, in the context of human-modified works, the Office has advised that it “will register works that contain otherwise unprotectable material that has been edited, modified, or otherwise revised by a human author, but only if the new work contains a ‘sufficient amount of original authorship’ to itself qualify for copyright protection.” Assessing whether a work created by GenAI and modified by a human has a sufficient amount of original authorship to qualify for copyright protection will necessarily require a case-by-case analysis and likely detailed documentation of the respective human and GenAI contributions.
On the other hand, the Office has suggested that virtually no amount of human prompting can give rise to a protectable AI-created work. For example, in the “Zarya of the Dawn” case, the short comic book’s human author described a creative process of trial-and-error, in which she provided “hundreds or thousands of descriptive prompts” for the book’s illustrations to the GenAI tool until she was satisfied with the final product. The Office nonetheless denied copyright protection for the disputed images, concluding that the AI tool rather than the author had originated the “traditional elements of authorship” in the images. Whether this reluctance to acknowledge the human role in GenAI prompt writing can stand the test of time remains to be seen, but pending further guidance, copyright protection is much more likely in circumstances where a human modifies GenAI output rather than tries to direct the output through prompt writing.
3. Duty to Disclose in Copyright Applications
Applicants for federal copyright registration are required to disclose whether the subject work includes content created by GenAI. Failure to do so can result in revocation of the copyright registration and, critically, loss of access to federal court and statutory damages. In an industry where copyrights are king, companies shouldn’t leave anything to chance, particularly when hiring creative agencies, production studios, or independent artists. Whenever commissioning valuable copyrightable works, companies should consider requiring vendors to disclose whether they use GenAI as part of their creative process and to document what, if any, GenAI-created materials are incorporated into the final work product. Compliance may prove difficult in practice, however, since AI is commonly used for a variety of editing, special effects, and post-production work, and identifying those elements after the fact may be next to impossible.
4. Challenging the Fair Use Assumption
A growing list of artists and creators, including comedian Sarah Silverman and the author Michael Chabon, have filed lawsuits against GenAI developers, alleging that the developers have used their works without authorization to train large language models and to produce infringing works. In the Silverman case, the plaintiffs claim that these AI tools summarized their written works without permission, essentially reproducing their content for free. The lawsuit emphasizes the potential loss of control over creative works and the ability to profit from them, arguing that AI-generated summaries and reproductions can undermine authors’ rights to control the use and sharing of their content.
These and other lawsuits are likely to run up against the assumption that using third-party works to train large language models is a “transformative” use and therefore non-infringing under copyright law, an assumption that dates back to early-internet cases involving web search indexing and similar conduct. The outcomes of these cases could fundamentally alter the relationship between content creators and GenAI developers, potentially requiring AI developers to retrain models with licensed or open-sourced content and creating new revenue opportunities for creators.
5. Publicity Rights
Publicity rights pose another challenge to the use of GenAI tools, as demonstrated by a recent class action lawsuit filed by “Big Brother” contestant Kyland Young against NeoCortext, Inc. Citing the California right of publicity statute, the lawsuit accuses the company of commercially exploiting the names, voices, photographs, and likenesses of actors, musicians, athletes, and celebrities to sell paid subscriptions to its deepfake app, Reface, without permission. Many GenAI tools are programmed to avoid directly reproducing copyrighted works, such as third-party photos or songs, though not always effectively; similar safeguards are not always in place to prevent GenAI tools from reproducing celebrity likenesses, voices, or other indicia of personality. Moreover, the state-by-state patchwork of publicity laws can make it a challenge for celebrities to enforce publicity rights. As a result, there are growing calls for a federal publicity statute that would better protect celebrities and other well-known individuals from exploitation by AI. Importantly, agencies and studios utilizing AI-generated content that includes recognizable names, likenesses, or voices can take steps to mitigate infringement risk by ensuring such indicia of personality are not featured prominently in the finished work or used in a manner that could cause confusion about the personality’s sponsorship or endorsement of, or affiliation with, the work. For expressive works, the First Amendment may also provide a viable defense from liability.
6. Accuracy and Reliability
For all their well-deserved accolades and hype, generative AI tools remain a work in progress. Users, especially commercial enterprises, should never assume that AI-created works are accurate, non-infringing, or fit for commercial use. In fact, there have been numerous recorded instances in which GenAI tools have created works that arguably infringe the copyrights of existing works, invent facts, or cite phantom sources. It is also important to note that works created by GenAI may incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for commercial purposes without appropriate rights or permissions. As with any third-party content, companies should carefully vet anything produced by GenAI before using it for commercial purposes.
7. Data Security and Confidentiality
Before utilizing GenAI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards. As with any third-party software, the security and data processing practices of these tools vary. Some tools may store and use prompts and other information submitted by users, while others offer assurances that such information will be deleted or anonymized. Enterprise AI solutions, such as Microsoft's Azure OpenAI Service, can also help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, Codex, and others within the data security and confidentiality parameters required by the enterprise.
Before authorizing the use of generative AI tools, organizations and their legal counsel should (1) carefully review the applicable terms of use, (2) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (3) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.