5 Legal Pitfalls to Avoid If You Use Generative A.I. for Your Business


A.I. tools are convenient for creating text, images, and code–but keep in mind these legal concerns.

Sam Altman, the co-founder and CEO of OpenAI, told senators yesterday that the government needs to put guardrails on artificial intelligence. OpenAI’s best-known product, ChatGPT, has exploded in popularity over the past few months as part of a new crop of tools–collectively called generative artificial intelligence–that can create images, text, and even computer code from brief text prompts.

Widely used systems like email platform Mailchimp and web-hosting platform GoDaddy let small-business owners experiment with using A.I.-generated marketing copy for their mailing lists and websites. And A.I. content platform Jasper shot to a valuation of $1.5 billion in two years by giving business owners tools to create A.I.-generated text that matches their brand’s voice.

Such tools can be a boon for small-business owners who don’t have a staff of writers, artists, and developers on hand–business owners who look to generative A.I. to help “lighten the lift” so “they’re not staring at a blank box,” according to Ryan Cantor, chief product officer of small-business communications platform Thryv.

But generative A.I. tools are so new that the legal frameworks for using them are still taking shape. And the nature of these products builds risk right in: The systems learn to create new outputs by ingesting vast stores of information–often everything they can scrape from the internet–and they continually improve via inputs from users. That means whatever a generative A.I. tool puts out could be wrong, dangerous, or a regurgitation of another company’s intellectual property.

Experts advise talking to your employees now and setting guidelines for how they and the company are going to use generative A.I. Here are a few things to start thinking through.

1. Fact-check A.I.-generated content

A.I. text generators such as ChatGPT can sometimes “hallucinate,” or spit out convincing-sounding fake information. One of the biggest risks with these tools is that if your company generates marketing copy for a product and that copy turns out to include false information, the company could expose itself to a false-advertising claim, says Justin Pierce, co-chair of the intellectual property division at the Washington, D.C.-based law firm Venable.

For this reason, Cantor recommends using A.I.-generated content as a starting point, not the final version of your copy. Asking the program to cite sources for the text it generates and doing your own vetting can also help ensure your materials are accurate.

2. Don’t generate images of celebrities

A.I. image generators can produce works in the style of a particular artist as well as images that look like a notable person. It might seem like a fun way to illustrate a newsletter or create a viral social media post (remember those images of Pope Francis appearing to wear Balenciaga?)–but using images of a real person or character could land your company in trouble.

“As a matter of policy, avoid putting in trademarks, celebrities, well-known images, or popular fictional characters in the prompt,” says Pierce. “When you do that, you’re going to generate content that has a high probability of being subject to claims of copyright infringement, rights of publicity violations, or trademark infringement.”

3. Consider your copyright protections

In March, the U.S. Copyright Office released guidance saying that works such as art, music, or writing in which a human supplies the prompt but an A.I. system produces the output are generally not protected under copyright law. So you may have no recourse if someone copies work you created using a machine.

But there are exceptions: When A.I.-generated content is incorporated into a larger work, the use of A.I. must be disclosed when registering the work, but the human-created portions can still qualify for copyright protection.

4. Protect your IP

Code, images, and text entered by users are often retained by the A.I. system, where they could later be surfaced to someone else. Samsung employees, for instance, were recently reported to have put proprietary information into ChatGPT.

“You have to be really careful to not put your own confidential or proprietary information into one of the generative A.I. platforms,” says Pierce. “For instance, let’s say employees are using it because they want to improve their software. Maybe they’re looking for ways to more efficiently complete some lines of code, and they put in lines of company code, and ask the platform how best to finish it. Be careful, because what those employees may be doing is putting in code that is proprietary to your company, and undermining your ability to protect confidential company information or trade secrets.”

Tony Pietrocola, co-founder of the Cleveland-based cybersecurity firm AgileBlue, warns that if your company’s code is discovered, hackers could probe it for vulnerabilities–perhaps using the exact same generative A.I. tool.

Some systems offer users the option not to share their queries with the company, and switching to a paid version can give users more control over how their data is shared.
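Beyond relying on a vendor’s data controls, some companies enforce this kind of policy mechanically, with a pre-submission filter that refuses prompts containing obvious markers of confidential material. Here is a minimal Python sketch of the idea; the marker list and function name are illustrative assumptions, not part of any real vendor’s tooling:

```python
import re

# Illustrative patterns only; a real policy would be tailored to your company.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),       # document markings
    re.compile(r"\bINTERNAL USE ONLY\b", re.IGNORECASE),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),          # hard-coded credentials
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def flag_sensitive_prompt(prompt: str) -> bool:
    """Return True if the prompt matches any known marker of confidential data."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(flag_sensitive_prompt("Summarize this CONFIDENTIAL roadmap"))  # True
    print(flag_sensitive_prompt("Write a haiku about spring"))           # False
```

Pattern matching alone cannot catch every trade secret, so a filter like this works best as a backstop to clear employee guidelines, not a replacement for them.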

5. Protect your customers

Take care to keep customer data–including phone numbers, account numbers, and health information–out of A.I.-powered chatbots and systems for generating correspondence.

Pietrocola says that he has seen companies putting customers’ financial or health information into systems like ChatGPT to help understand and aggregate the data. Such inadvertent disclosures could violate privacy laws, particularly for companies in heavily regulated industries like finance and health care. And personal customer information could potentially be discovered and used by hackers to commit fraud.
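One way to act on this advice is to redact recognizable customer identifiers before any text reaches an external A.I. service. The Python sketch below is illustrative only–the patterns and placeholder tokens are assumptions, and compliance in regulated industries demands far broader, audited coverage:

```python
import re

# Illustrative redaction rules; real systems need much wider, audited coverage.
REDACTIONS = [
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
]

def redact(text: str) -> str:
    """Replace recognizable customer identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

if __name__ == "__main__":
    msg = "Customer Jane Doe, jane@example.com, 555-867-5309, asked about her bill."
    print(redact(msg))
    # → Customer Jane Doe, [EMAIL], [PHONE], asked about her bill.
```

Purpose-built anonymization tools exist for this job; the point of the sketch is simply that scrubbing should happen before the data leaves your systems, not after.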