The Generative AI Revolution: Key Legal Considerations for the Fashion & Retail Industry

For better or worse, generative artificial intelligence (AI) is already transforming the way we live and work. Retail and fashion companies that fail to embrace AI risk losing their current market share or, worse, going out of business altogether. This paradigm shift is existential, and businesses that recognize and leverage AI will gain a significant competitive advantage.

For instance, some of our clients are using AI to streamline product design processes, reducing the costs and time necessary to generate designs, while others employ virtual models to avoid issues related to the use of adult and child models. Additionally, AI can provide valuable market intelligence to inform sales and distribution strategies. This alert will address these benefits, as well as other significant commercial advantages, and delve into the legal risks associated with using AI in the fashion and retail industry.

For background on how AI is impacting the marketplace generally, click here to watch our latest Fox Forum with Mike Pell, the leader behind The Microsoft Garage.

There are significant commercial advantages to using AI for retail and fashion companies, including:

1. Product Design

From fast fashion to luxury brands, AI is set to revolutionize the fashion and retail industry. It enables the generation of innovative designs by drawing inspiration from a designer’s existing works and incorporating the designer’s unique style into new creations. For instance, in March 2023, G-Star Raw created its first denim couture piece designed by AI. We also worked with a client who utilized an AI tool to analyze its footwear designs from the previous two years and generate new designs for 2024. Remarkably, the AI tool produced 50 designs in just four minutes, with half of them being accepted by the company. Typically, this process would have required numerous designers and taken months to complete. While it is unlikely that AI tools will entirely replace human designers, the cost savings and efficiency gained from using such technology are undeniable and should not be overlooked.

2. Virtual Models

2023 marks a groundbreaking year with the world’s first AI Fashion Week and the launch of AI-generated campaigns, such as Valentino’s Maison Valentino Essentials collection, which combined AI-generated models with actual product photography. Model selection and hiring consume a significant portion of fashion companies’ budgets, requiring entire departments and raising legal concerns such as royalties, SAG obligations, moral issues, and child labor. By leveraging AI tools to create lifelike virtual models, these companies can eliminate the associated challenges and expenses, as AI models are not subject to labor laws (including child entertainment regulations) or collective bargaining agreements.

3. Advertising Campaigns

AI can also be used to create entire advertising campaigns, from print copy to email blasts, blog posts, and social media. Companies traditionally invest substantial time and resources in these efforts, but AI can generate such content in mere moments. While human involvement remains essential, AI allows businesses to reduce the manpower required. Retailers can also benefit from AI-powered chatbots, which provide 24/7 customer support while reducing overhead expenses linked to in-person customer service. Moreover, AI’s predictive capabilities enable businesses to anticipate trends across various demographics in real time, driving customer engagement. By processing and analyzing vast amounts of consumer data and preferences, brands can create hyper-personalized and bespoke content, enhancing customer acquisition, engagement, and retention. Furthermore, AI facilitates mass content creation at an impressively low cost, making it an invaluable tool in today’s competitive market.

4. ESG – Virtual Mirrors and Apps

From an environmental, social, and corporate governance (ESG) standpoint, the use of AI-powered technology can reduce the need for retail stores to carry excess inventory and cut down on online returns and exchanges. AI smart mirrors can enhance in-store experiences for shoppers by enabling them to virtually try on outfits in various sizes and colors. Furthermore, customers can now enjoy the virtual try-on experience from the comfort of their homes, as demonstrated by Amazon’s “Virtual Try-On for Shoes,” which allows users to visualize how selected shoes will appear on their feet using their smartphone cameras.

5. Product Distribution and Logistics

Fashion companies rely on their C-level executives to make informed predictions about product quantities, potential sales in specific markets or stores, and the styles that will perform best in each market. In terms of logistics, AI models can be employed to forecast a business’s future sales by analyzing historical inventory and sales data. This ability to anticipate supply chain requirements can lead to increased profits and support the industry’s initiatives to reduce waste.
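
By way of illustration only, the sketch below shows the basic idea of projecting future demand from historical sales figures. The monthly numbers are invented, and the model is a simple linear trend fit with numpy rather than any particular vendor's forecasting product; real deployments would typically layer in seasonality, promotions, and store-level data.

```python
import numpy as np

# Hypothetical monthly unit sales for a single SKU over the past two years.
# In practice, these figures would come from a company's inventory/ERP system.
monthly_sales = np.array([
    320, 310, 345, 360, 400, 420, 455, 470, 430, 410, 520, 610,   # year 1
    340, 335, 370, 390, 430, 460, 500, 515, 470, 450, 570, 680,   # year 2
])

months = np.arange(len(monthly_sales))

# Fit a simple linear trend (ordinary least squares via polyfit).
slope, intercept = np.polyfit(months, monthly_sales, deg=1)

# Project unit sales for the next three months from the fitted trend.
future_months = np.arange(len(monthly_sales), len(monthly_sales) + 3)
forecast = slope * future_months + intercept

print("Projected units for the next quarter:", np.round(forecast).astype(int))
```

Even a toy example like this shows why the data pipeline matters as much as the model: the forecast is only as reliable as the historical inventory and sales records feeding it.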

To read about additional AI use cases in the fashion industry, click here.

Legal and Ethical Risks

Although AI has some major advantages, it also comes with a number of legal and ethical risks that should be considered, including:

1. Accuracy and Reliability

For all their well-deserved accolades and hype, generative AI tools remain a work in progress. Users, especially commercial enterprises, should never assume that AI-created works are accurate, non-infringing, or fit for commercial use. In fact, there have been numerous recorded instances in which generative AI tools have created works that arguably infringe the copyrights of existing works, make up facts, or cite phantom sources. It is also important to note that works created by generative AI may incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for commercial purposes without appropriate rights or permissions. As with any other source, companies should carefully vet content produced by generative AI before using it for commercial purposes.

2. Data Security and Confidentiality

Before utilizing generative AI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards. As with any third-party software, the security and data processing practices of these tools vary. Some tools may store and use prompts and other information submitted by users. Other tools offer assurances that prompts and other information will be deleted or anonymized. Enterprise AI solutions, such as Microsoft’s Azure OpenAI Service, can also potentially help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, Codex, and more within the data security and confidentiality parameters required by the enterprise.
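
As a purely illustrative sketch, routing prompts through a company-managed Azure OpenAI deployment rather than a public consumer endpoint might look something like the following. The endpoint, environment variables, API version, and deployment name are placeholders, and the contract terms governing the service, not the code, ultimately determine how prompt data is handled.

```python
import os
from openai import AzureOpenAI  # openai Python SDK (v1.x)

# Connect to a company-managed Azure OpenAI deployment so prompts are sent to
# the enterprise's own Azure resource rather than a public consumer endpoint.
# The endpoint, API version, and deployment name below are placeholders.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="company-gpt-deployment",  # the enterprise's own model deployment
    messages=[
        {"role": "system", "content": "You draft marketing copy for a fashion retailer."},
        {"role": "user", "content": "Write a short product description for a denim jacket."},
    ],
)

print(response.choices[0].message.content)
```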

Before authorizing the use of generative AI tools, organizations and their legal counsel should (i) carefully review the applicable terms of use, (ii) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (iii) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.

3. Software Development and Open-Source Software

One of the most popular use cases for generative AI has been computer coding and software development. But the proliferation of AI tools like GitHub Copilot, as well as a pending lawsuit against its developers, has raised a number of questions for legal counsel about whether use of such tools could expose companies to legal claims or license obligations.

These concerns stem in part from the use of open-source code libraries in the data sets for Copilot and similar tools. While open-source code is generally freely available for use, that does not mean that it may be used without condition or limitation. In fact, open-source code licenses typically impose a variety of obligations on individuals and entities that incorporate open-source code into their works. These obligations may include requiring an attribution notice in the derivative work, providing access to source code, and/or requiring that the derivative work be made available on the same terms as the open-source code.

Many companies, particularly those that develop valuable software products, cannot risk having open-source code inadvertently included in their proprietary products or inadvertently disclosing proprietary code through insecure generative AI coding tools. That said, some AI developers are now providing tools that allow coders to exclude AI-generated code that matches code in large public repositories (in other words, making sure the AI assistant is not directly copying other public code), which would reduce the likelihood of an infringement claim or inclusion of open-source code. As with other AI-generated content, users should proceed cautiously, carefully reviewing and testing AI-contributed code.

4. Content Creation and Fair Compensation

In a recent interview, Billy Corgan, the lead singer of Smashing Pumpkins, predicted that “AI will change music forever” because once young artists figure out they can use generative AI tools to create new music, they won’t spend 10,000 hours in a basement the way he did. The same could be said for photography, visual art, writing, and other forms of creative expression.

This challenge to the notion of human authorship has ethical and legal implications. For example, generative AI tools have the potential to significantly undermine the IP royalty and licensing regimes that are intended to ensure human creators are fairly compensated for their work. Consider the recent example of the viral song, “Heart on My Sleeve,” which sounded like a collaboration between Drake and the Weeknd, but was in fact created entirely by AI. Before being removed from streaming services, the song racked up millions of plays — potentially depriving the real artists of royalties they would otherwise have earned from plays of their copyrighted songs. In response, some have suggested that human artists should be compensated when generative AI tools create works that mimic or are closely inspired by copyrighted works and/or that artists should be compensated if their works are used to train the large language models that make generative AI possible. Others have suggested that works should be clearly labeled if they are created by generative AI, so as to distinguish works created by humans from those created by machine.

5. Intellectual Property Protection and Enforcement

Content produced without significant human control and involvement is not protectable by US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way toward mitigating IP risk: generative AI tools should aid human creation, not replace it. Provided that generative AI tools are used merely to help with drafting or the creative process, it is more likely that the resulting work product will be protectable under copyright or patent laws. In contrast, asking generative AI tools to create a finished work product, such as asking them to draft an entire legal brief, will likely deprive the final work product of protection under IP laws, not to mention raise professional responsibility and ethical concerns.

6. Labor and Employment

When Hollywood writers went on strike, one issue in particular generated headlines: a demand by the union to regulate the use of artificial intelligence on union projects, including prohibiting AI from writing or re-writing literary material; prohibiting its use as source material; and prohibiting the use of union content to train AI large language models. These demands are likely to presage future battles to maintain the primacy of human labor over cheaper or more efficient AI alternatives.

Employers are also utilizing automated systems to target job advertisements, recruit applicants, and make hiring decisions. Such systems expose employers to liability if they intentionally or unintentionally exclude or adversely impact protected groups. According to the Equal Employment Opportunity Commission (EEOC), that’s precisely what happened with iTutorGroup, Inc., which the agency alleged used application software that automatically rejected older applicants.

7. Future Regulation

Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the United States, legislators and prominent industry voices have called for proactive federal regulation, including the creation of a new federal agency that would be responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for the harm those technologies cause. It seems unlikely that US legislators and regulators will overcome partisan divisions and enact a comprehensive framework anytime soon, but as is becoming increasingly clear, these are unprecedented times.

If you have questions about any of these issues or want to plan ahead, contact one of the authors or a member of our AI, Metaverse & Blockchain industry team.
