As a part of our artificial intelligence (AI) series, John Monterubio, Loeb & Loeb’s Advanced Media & Technology senior counsel, examines the impact that the rise of AI is having on the advertising space. While AI boasts cost- and time-saving benefits, there are several risks that businesses should consider when adopting the latest AI technologies. As legislation catches up to AI, new regulations may alter how businesses leverage new technologies, with concerns over privacy, data collection and intellectual property remaining at the forefront for many regulatory bodies.
As businesses navigate this dynamic terrain, several emerging trends are likely to shape the advertising industry’s future. Targeted advertising has become a core component of most advertisers’ media strategies, and AI has taken targeting technology to the next level by allowing advertisers to adjust ads in real time based on personal data, making it an effective and profitable advertising tool. How can businesses leverage AI technology that captures and uses personal data while adhering to privacy regulations? Meanwhile, regulators are racing to keep pace with concerns surrounding the rapid advancement of AI. How can businesses stay ahead of pending legislation? What role can legal counsel play in helping clients establish policies that align with privacy regulations and industry best practices for AI-driven advertising? Below, John explores these topics and emerging trends in AI and advertising, and provides insight into how businesses can best deploy AI technology.
Tell us about your practice and the kinds of advertising matters you generally handle.
I advise agencies and brands on a wide variety of legal issues relating to advertising, social media and technology, including advertising claims, endorsements and testimonials, SAG-AFTRA, privacy, and contract negotiations. More recently, this has included AI and how to navigate the legal risk this technology brings while still using it in a manner that is practical and effective for a business.
As AI continues to transform the advertising landscape, what emerging trends have you observed that are impacting businesses?
I anticipate we will continue to see a surge in the adoption of generative AI for the creation of advertising content. Specifically, there is a growing trend toward leveraging AI to develop and streamline hyper-personalized digital ads that adjust based on data in individual viewers’ advertising profiles, such as their demographics, online behavior and preferences.
For example, if I’m a shoe retailer trying to sell shoes to someone whose advertising profile skews toward sports, I may typically target digital ads showing athletic shoes to that viewer. However, if that user recently visited a job search site, my ad may change to show a shoe more appropriate for a work setting, with ad language tailored to how great the shoe will make the viewer look at a job interview.
In addition, I believe we will see greater adoption of generative AI in the creation of traditional, non-dynamic ads as businesses seek to capitalize on the cost savings. For instance, in commercials, we may observe a rise in the use of AI to generate environments, background elements or virtual actors. This approach is cost-effective and may allow advertisers to produce ads more efficiently. It also gives businesses the opportunity to reallocate the savings to talent and other business needs.
As targeted advertising becomes more prevalent, how can businesses compliantly leverage AI optimizations that rely on user data?
Advertisers need to be sure that they have the necessary rights for any data that they use in personalized ad targeting, just as they would for any other use of that data. This could entail updating the disclosures and privacy policies used for data collection to account for the use of AI in ad targeting. Advertisers using AI software to collect data should thoroughly vet the technology and understand how it uses data. They should also confirm whether the AI software has options to disable data collection or to disclose how data is being used.
What are the primary legal risks when deploying AI in advertising, and how can companies proactively address and mitigate these challenges to ensure compliance with existing regulations?
Deploying generative AI in content creation poses intellectual property infringement risks. Any content created using generative AI, whether text, image or video, relies on input that is fed into the AI technology and processed by its algorithms. There is no guarantee that the AI software provider has the necessary rights from the owners of that input to use it to generate new content for the advertiser’s intended purpose, or that the algorithms sufficiently alter the underlying input so that the output is not too close a copy of it.
To address this risk, certain major technology companies now offer an “AI indemnity” for their technology. However, regardless of whether the AI platform offers an indemnity, content creators should not use generative AI output “as is.” Instead, they should treat the output as an initial “rough” draft, revising it to fit the relevant use. This is good practice not only for legal reasons but also because, while AI has improved to the point where its outputs often look human-created, it is not a complete substitute for human-created work. Any work produced by AI should be reviewed and adjusted, not just to mitigate legal risks but also to ensure it is actually a quality, professional-looking ad.
Confidentiality is another risk posed by AI, as any information that is fed into an AI platform typically becomes part of the database used for its algorithms. We saw similar confidentiality concerns when software-as-a-service and the cloud were introduced. To address these risks on an operational level, advertisers should place restrictions on inputting their confidential information into AI platforms until they have thoroughly vetted these platforms and understand how the data will be stored and used. And to address these risks on a contractual level, the advertiser should ensure that they have strong confidentiality and security obligations in their contract with the AI provider.
An area where I believe many of the risks remain unknown is privacy, given how quickly the legal landscape is evolving and the substantial sensitivities around the use of personal data in both advertising and non-advertising contexts. With the European Union (EU) at the forefront of privacy regulation, it’s likely that the EU will continue to pioneer privacy laws relating to AI, as it is doing with the tentative EU AI Act agreement. I am doubtful we will see any lawmaking at the federal level in the United States regarding AI anytime soon, given the gridlock in Congress.
We recently hosted a webinar on privacy issues relating to AI, which can be viewed here.
Have there been any recent legislative changes or landmark cases that have impacted or will impact how advertisers use AI?
An important case to watch is the Andersen v. Stability AI case, where multiple artists allege that Stability AI used their artwork without permission to train its Stable Diffusion platform for generating images. It may set a precedent regarding when, and to what extent, using content for training a generative AI model is permissible and whether the resulting output from the AI constitutes an infringement of the training content.
Another case to watch is Kadrey v. Meta Platforms, which is similar to the Stability AI case, except that the generative AI tool in that case uses the plaintiffs’ literary works to generate text. This case raises similar questions about the boundaries of using copyrighted material to train AI models and when and to what extent the generated content infringes upon the original works.
On the legislative front, the EU AI Act is a first-of-its-kind law aimed at regulating the development and deployment of AI technologies. Its key objective is to strike a balance between fostering innovation and addressing potential risks relating to privacy, bias and accountability. As the EU AI Act progresses through the EU’s legislative process, its implications for businesses will be closely monitored to assess its effectiveness in addressing AI concerns.
Looking ahead, what do you anticipate will be the key legal challenges and opportunities for businesses using AI-driven advertising, and how can legal counsel proactively support their clients?
AI offers many opportunities for enhancing efficiency and reducing costs in business operations. The legal challenge, however, will be finding a way to roll out AI in your business while mitigating the many risks inherent in the technology. To do so, businesses should establish comprehensive AI policies that not only outline the permissible uses of AI but also incorporate safeguards to minimize the technology’s potential pitfalls. This is where legal counsel can help, assisting with developing those policies and advising on how to implement and enforce them.
I’ve seen an increasing number of clients, especially advertising agencies, seeking guidance on developing AI policies and on language to use in their own client contracts. I believe we will continue to see more of this as AI is more widely adopted and as legislation is passed that impacts how AI is used.
We recently hosted a webinar on AI policies, which can be viewed here.
How is Loeb a leader in the AI and advertising space?
Loeb has always been at the forefront of new technologies and has longstanding experience in the advertising space. We have a strong understanding of how AI works from a technical perspective and of the benefits it can bring to advertising. We use this knowledge to assess the legal risk and provide clients with practical advice on how to leverage AI in a manner that meets their business needs and aligns with each client’s risk tolerance. Whether we are presenting to clients on the topic of AI, speaking at a conference or working with vendors developing AI tools to help them mitigate risks, there is an ongoing collaborative effort to educate and equip our clients with the resources they need to navigate this evolving landscape.