Brand publishers are increasingly using generative artificial intelligence to aid content production, but legal questions are arising as they do so. Does use of AI breach copyright laws? Are brands liable for AI-generated inaccuracies? And are they legally required to disclose their use of AI to consumers?
Toolkits spoke with lawyers and IP experts about the legal considerations brand publishers should keep in mind as they experiment with generative artificial intelligence.
AI content can be copyrighted – but with some caveats
The most important consideration for brands concerns copyright and ownership of AI-generated content. On March 25, the U.S. Copyright Office released new guidance – its first ever on AI content – stating that works created with the help of artificial intelligence may be copyrightable, but only if “sufficient” human authorship was involved.
Put another way, the only way AI-generated content or work can be copyrighted by brands is if humans were involved in its production or authorship. The Copyright Office said in its guidance that the scope and depth of what constitutes human authorship is already being challenged as more and more AI-generated work floods the market.
According to the guidance, works that are created with no human intervention or involvement cannot be copyrighted. That includes content generated solely in response to a prompt that a human entered. According to attorneys Evan Fourvitz and S. Lara Ameri at Ropes & Gray, “the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user—and, thus, the resulting work is not copyrightable.”
In 2023, the Copyright Office also ruled on a case involving images created by Kris Kashtanova, who illustrated a graphic novel with images generated by Midjourney in response to their prompts. The Office determined that these images were not copyrightable because when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.”
One of the key issues here is that copyright law for content is already relatively squishy. Anyone in the business of creating content has experienced other entities reproducing part or all of their work, with little recourse. Fighting expensive copyright battles isn’t usually an option. One potential effect of more AI-generated content entering the marketplace is that content theft and plagiarism may simply matter less, since content is so much cheaper to produce with AI. “Does it matter when you’re saving so much money on the production side?” is a chief consideration for many brands, said one IP lawyer. “But, if you want to err on the side of protectability, have a human editor interact” with the content, said this lawyer.
Use of AI tools could breach copyright laws
Brands using AI tools to generate content or aid in content production may themselves be in breach of copyright laws. At issue is generative AI’s tendency to be trained on available information – which means that those using certain tools may be reproducing other people’s work (and passing it off as their own).
One of the cases being watched closely by the legal community is a class action lawsuit that accuses Microsoft, GitHub and OpenAI of violating copyright law by allowing Copilot to regurgitate licensed code snippets without credit. In another case, Getty Images is taking Stability AI to court for using images from its site without permission to train its art-generating AI.
There are also a number of cases where content creators and artists are protesting against their work being remixed by generative AI platforms. Just this week, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey sued OpenAI and Meta in U.S. District Court over dual claims of copyright infringement. They claim that the models were trained on illegally acquired datasets containing their work.
Geographic considerations also matter, say legal experts: Some countries, such as the U.K., appear to be becoming much more permissive in their attitudes, allowing mining of content for “any purpose.”
The takeaway for brands is that legal issues may arise if they unwittingly use copyrighted material in their own publishing – particularly when that content is being used to market products. One IP lawyer who wasn’t authorized to speak on record said that, in her view, brand publishing may be even more at risk, since it is content used for marketing purposes.
Factual accuracy is the biggest concern
Content accuracy has already come up as a common issue for publishers using AI. A few months ago, CNET found errors in 41 of the 77 stories it published that were written using an AI tool. AI is ultimately only as good as the content it is trained on, which means that brand publishers using AI-powered writing assistants or tools to create drafts or outlines may find that the resulting product contains factual errors.
AI also struggles with certain tonal patterns like sarcasm or irony, and generative AI tools may have a hard time detecting content inaccuracies.
At the same time, those seeking to challenge false information created by LLMs may find it difficult to do so. Defamation suits require proving “malice,” which makes it hard for such cases to go anywhere, since it’s difficult to argue that an AI was malicious in what it produced. That means suits are likelier to be brought against publishers than against the creators of those AI tools.
If brands are using commercial tools to produce AI content, they should look for indemnification language in the terms and conditions when they purchase the product to see what legal protections they are entitled to if factually inaccurate information is released, lawyers say. For example, Shutterstock offers enterprise customers full indemnification for the license and use of generative AI images obtained on its platform, as does Adobe.
Ultimately, however, “it comes down to fact-checking and digital literacy. You wouldn’t publish something without reviewing it; it’s the same with AI,” said one lawyer.
Brands will need to decide how much they will disclose
One of the key considerations surrounding the use of AI is the issue of disclosure. If brand publishers are using generative AI tools to create content, they will eventually need to decide if – and to what extent – they will disclose how that content was produced.
One of the issues is around plagiarism: Can AI-produced content be considered plagiarism? And what might that mean for distribution on platforms? For example, observers noted recently that a Substack publication called The Rationalist used a combination of AI tools to remix content produced by reporter Alex Kantrowitz. Substack left the publication’s post up, even though on the face of it, it appeared to violate the platform’s rules about plagiarism.
This boils down to the issue of ownership. How much remixing needs to happen before the work can be passed off as someone else’s? And should audiences be told when AI tools were involved in the creation of that work?
How AI output reflects the training material it uses to create its final product is part of the question. If training material is reflected in the overall output to a large degree, the model is called “overfit.” If it’s not reflected very much, it’s “underfit.” However, where the legal (and ethical) line between overfit and underfit sits is still a gray area.
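There is no settled legal test for how much an output “reflects” its source, but the degree of verbatim reflection can be measured crudely. As a minimal sketch (not a legal standard, and with helper names of my own invention), one rough proxy is the share of an output’s word n-grams that also appear in a candidate source text:

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output, source, n=3):
    """Fraction of the output's n-grams that also occur in the source."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source, n)) / len(out)

source = "the quick brown fox jumps over the lazy dog"
verbatim = "the quick brown fox jumps over the lazy dog"
remix = "a fast auburn fox leaps over a sleepy dog"

print(overlap_ratio(verbatim, source))  # 1.0 -> output heavily reflects the source
print(overlap_ratio(remix, source))     # 0.0 -> little verbatim reflection
```

A heavily “overfit” output would score near 1.0 against its training material; a paraphrased remix scores low even when the ideas are clearly borrowed, which is exactly why n-gram overlap alone can’t settle the plagiarism question.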
In order to protect themselves, brand publishers will need to figure out to what extent they can keep their output from being “crawled” for AI training. Registration-walled or paywalled content is one line of defense, as is blocking crawlers such as Common Crawl’s CCBot, whose public web archives are widely used to train large language models.
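For publishers hosting their own sites, one common (if voluntary, since compliance depends on the crawler honoring it) defense is a robots.txt directive. A minimal sketch, using Common Crawl’s publicly documented user-agent:

```
# robots.txt — ask Common Crawl's crawler, whose archives feed many
# LLM training datasets, not to fetch any page on this site.
# Compliance is voluntary on the crawler's part.
User-agent: CCBot
Disallow: /
```

The file goes at the site root (e.g. /robots.txt); other AI-training crawlers can be blocked the same way by adding their documented user-agent names.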
Ultimately, legal considerations around the use of AI are evolving rapidly. Brands that are serious about using AI as part of their overall content strategies will need to figure out approaches and risk-mitigation strategies that work for their needs. At the same time, most lawmakers are still figuring out what “AI” even is, and a broad standard for AI governance remains a future goal.