The Generative AI Revolution: Key Legal Considerations for the Nonprofit & Trade Association Industry
This alert describes how AI is already affecting the nonprofit and trade association industry, as well as some of the key legal considerations that may shape the future of generative AI tools. You can also watch our latest Fox Forum, in which we talk with Mike Pell, the visionary innovation leader at Microsoft, a principal investor in OpenAI, the trailblazing company behind ChatGPT.
Our nonprofit and trade association clients are already starting to use AI tools like ChatGPT to create marketing communications, member messages, and stunning graphics. We’ve also seen them use AI to identify and summarize competitors’ and members’ business operations for planning purposes, and to draft mission and vision statements for strategic planning sessions.
Below, we outline key legal issues organizations should keep in mind.
1. Accuracy and Reliability
For all their well-deserved accolades and hype, generative AI tools remain a work in progress. Users should never assume that AI-created works are accurate, non-infringing, or fit for an organization’s use. In fact, there have been numerous recorded instances in which generative AI tools have produced works that arguably infringe the copyrights of existing works, make up facts, or cite phantom sources. It is also important to note that works created by generative AI may incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for an organization’s purposes without appropriate rights or permissions. As with any other content, organizations should carefully vet anything produced by generative AI before putting it to use.
2. Data Security and Confidentiality
Before utilizing generative AI tools, organizations should consider whether the specific tools adhere to internal data security and confidentiality standards. As with any third-party software, the security and data processing practices of these tools vary. Some tools may store and use prompts and other information submitted by users; others offer assurances that prompts and other information will be deleted or anonymized. Enterprise offerings, such as Microsoft’s Azure OpenAI Service, can also help reduce privacy and data security risks by providing access to popular tools like ChatGPT, DALL-E, Codex, and more within the data security and confidentiality parameters the enterprise requires.
Before authorizing the use of generative AI tools, organizations and their legal counsel should (i) carefully review the applicable terms of use, (ii) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (iii) consider whether to limit or restrict access on organization networks to any tools that do not satisfy the organization’s data security or confidentiality requirements.
3. Software Development and Open Source Software
One of the most popular use cases for generative AI has been computer coding and software development. But the proliferation of AI tools like GitHub Copilot, as well as a pending lawsuit against its developers, has raised a number of questions for legal counsel about whether use of such tools could expose companies and organizations to legal claims or license obligations.
These concerns stem in part from the use of open source code libraries in the data sets for Copilot and similar tools. While open source code is generally freely available for use, that does not mean that it may be used without condition or limitation. In fact, open source code licenses typically impose a variety of obligations on individuals and entities that incorporate open source code into their works. This may include requiring an attribution notice in the derivative work, providing access to source code, and/or requiring that the derivative work be made available on the same terms as the open source code.
Many companies and organizations, particularly those that develop valuable software products, cannot risk having open source code inadvertently included in their proprietary products or having proprietary code inadvertently disclosed through insecure generative AI coding tools. That said, some AI developers now provide settings that allow coders to exclude AI-generated code that matches code in large public repositories (in other words, ensuring the AI assistant is not directly copying other public code), which can reduce the likelihood of an infringement claim or the inadvertent inclusion of open source code. As with other AI-generated content, users should proceed cautiously and carefully review and test any AI-contributed code.
4. Content Creation and Fair Compensation
In a recent interview, Billy Corgan, the lead singer of Smashing Pumpkins, predicted that “AI will change music forever” because once young artists figure out they can use generative AI tools to create new music, they won’t spend 10,000 hours in a basement the way he did. The same could be said for photography, visual art, writing, and other forms of creative expression.
This challenge to the notion of human authorship has ethical and legal implications. For example, generative AI tools have the potential to significantly undermine the IP royalty and licensing regimes that are intended to ensure human creators are fairly compensated for their work. Consider the recent example of the viral song “Heart on My Sleeve,” which sounded like a collaboration between Drake and the Weeknd but was in fact created entirely by AI. Before being removed from streaming services, the song racked up millions of plays – potentially depriving the real artists of royalties they would otherwise have earned from plays of their copyrighted songs. In response, some have suggested that human artists should be compensated when generative AI tools create works that mimic or are closely inspired by copyrighted works and/or that artists should be compensated if their works are used to train the large language models that make generative AI possible. Others have suggested that works should be clearly labeled if they are created by generative AI, so as to distinguish works created by humans from those created by machine.
5. Intellectual Property Protection and Enforcement
Content produced without significant human control and involvement is not protectable under US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way toward mitigating IP risk: generative AI tools should aid human creation, not replace it. Provided that generative AI tools are used merely to assist with drafting or the creative process, it is more likely that the resulting work product will be protectable under copyright or patent laws. In contrast, asking a generative AI tool to create a finished work product, such as drafting an entire legal brief, will likely deprive the final work product of protection under IP laws, not to mention raise professional responsibility and ethical concerns.
6. Labor and Employment
When Hollywood writers went on strike recently, one issue in particular generated headlines: a demand by the union to regulate the use of artificial intelligence on union projects, including prohibiting AI from writing or re-writing literary material; prohibiting its use as source material; and prohibiting the use of union content to train AI large language models. These demands are likely to presage future battles to maintain the primacy of human labor over cheaper or more efficient AI alternatives. Meanwhile, the Equal Employment Opportunity Commission (EEOC) is warning companies about the potential adverse impacts of using AI in employment decisions.
7. Future Regulation
Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the US, legislators and prominent industry voices have called for proactive federal regulation, including the creation of a new federal agency responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for harms they cause. It seems unlikely that US legislators and regulators will overcome partisan divisions and enact a comprehensive framework anytime soon, but, as is becoming increasingly clear, these are unprecedented times.
If you have questions about any of these issues or want to plan ahead, contact one of the authors or a member of our AI, Metaverse & Blockchain industry team. ArentFox Schiff’s Nonprofits & Trade Associations team advises on all aspects of your organization’s operations.