Advanced Media & Technology senior counsel John Monterubio offers insights into the “responsible” and “ethical” use of artificial intelligence (AI) in advertising. The two terms are often used interchangeably as buzzwords; John clarifies the definitions of and distinctions between them, discusses the legal risks of integrating AI into advertising, addresses the rise of generative AI and misinformation, and outlines the policies necessary to leverage AI’s benefits compliantly.
Tell us about your practice and the type of advertising matters you generally handle.
I have a wide-ranging and diverse practice, but much of my work involves technology transactions and advising on intellectual property (IP) and advertising issues, particularly regarding the use of AI. Recent matters I have handled include deals involving the development of a bespoke generative AI tool, the licensing of data for AI model training, and advertising services that utilize generative AI. I have also assisted clients in developing their AI policies and aligning their client or vendor agreements to reflect responsible and ethical AI practices.
How are legal definitions of ‘responsible’ and ‘ethical’ AI use evolving in the advertising industry?
These are two buzzwords that are frequently tossed around interchangeably in the industry but actually refer to two different things. Responsible AI refers to the practice of putting safeguards in place to ensure that AI is used in compliance with relevant regulations, and of taking accountability for its use. Ethical AI, on the other hand, focuses on establishing and adhering to underlying values in how AI is used. So, an advertiser that wants to use AI ethically and responsibly in its advertising would first need to define the moral values its AI use should reflect (i.e., its ethical AI principles) and then establish the policies and procedures that ensure it follows those values (i.e., its responsible AI practices).
Examples of ethical AI principles for advertising are:
- Ensuring fairness and preventing bias in the targeting of advertisements.
- Being transparent about how AI is used.
- Respecting consumer privacy and choice.
Examples of responsible AI practices to ensure that these ethical AI principles are reflected in advertising are:
- Performing due diligence on the AI tools used and their data sources, including bias audits.
- Disclosing when content in an advertisement was created using AI.
- Disclosing to consumers how their data is used with AI and giving them the ability to opt out.
What are the key legal risks that businesses should be aware of when integrating AI into their advertising strategies?
The risks depend on the type of AI and its use case. Advertisers using generative AI to create content for their ads face two distinct IP risks. The first is that the content output from the AI tool may contain a third party’s IP, and if the advertiser uses that third-party IP in its ad without obtaining permission, it could be subject to an infringement claim. The second is that, under current U.S. law, there is no recognizable copyright in generative AI output. An advertiser therefore would not have the same copyright protection for AI-generated content in its ads that it would have for human-created content.
Advertisers using predictive AI for media planning and buying face different legal risks, including privacy risks: they must ensure that the necessary consents were obtained for any personal information used by the AI and that they remove the personal information of anyone who opts out. There are also other risks, such as data bias, which could result in discriminatory ad targeting.
These are just a few examples; others could arise depending on the AI use case, including risks related to rights of publicity and data security. Because the AI regulatory landscape is rapidly evolving, with laws such as the EU AI Act, Colorado’s AI Act and California’s AB 1836 and AB 2602, advertisers must adapt quickly to remain in compliance with the new AI laws in their jurisdictions.
What legal issues are likely to arise as generative AI becomes more prevalent in advertising, particularly around consumer misinformation?
Advertisers using generative AI to create content for their ads need to pay extra attention to whether that AI-generated content misleads consumers. For instance, if an advertiser selling beauty products uses generative AI to create content for its ads, the individuals featured in that content may have unrealistic characteristics that give consumers false expectations about the capabilities of those products. Not only would this pose false-advertising risks, but it could also cause societal harm by promoting unrealistic beauty standards.
What should advertisers consider when developing their policies for ethical and responsible use of AI in their advertising efforts?
They should look at their corporate values, such as transparency, fairness, diversity, equity and inclusion, and accountability, and use those as the basis for their ethical principles. Once the ethical principles are established, they should consider what types of AI they want to use in their advertising, how it will be used, what content will be input into the AI and what output will result, their risk tolerance, and the operational processes that need to be put in place to comply with those principles.
How is Loeb helping clients navigate these developments?
Loeb has been helping clients establish their ethical and responsible AI policies and guiding them in implementing those policies in a manner that is practical for their business needs while addressing risk at a level consistent with their risk tolerance. In addition, we advise clients who are addressing these issues in their services agreements. Through our work with advertisers, agencies and publishers, we understand the perspectives on both sides of the fence, which allows us to guide clients to practical solutions that fit their needs.