Why Generative A.I. Poses Risks for Early Adopters

Recent blunders from Google and Snapchat have the private and public sectors spooked.

If you’re looking to bring generative A.I. into your business, experts have one big piece of advice: Be careful, because the technology is getting increasingly out of hand. A new song from Drake and The Weeknd titled “Heart On My Sleeve” recently went viral and seemed poised to be the song of the summer, until it was abruptly removed from all monetized platforms last week. Turns out, the track wasn’t made by Drake or The Weeknd at all; it was generated by an A.I., and the original artists weren’t getting paid.

The incident is one of several that have underscored the need for rules and regulations around the use of A.I. Remember when fake, A.I.-generated images of the Pope took social media by storm? Getty Images is currently suing the creators of the text-to-image generator Stable Diffusion, alleging that it used Getty’s massive database of images to train its A.I. model without a license. If Getty is successful in the suit, businesses that used Stable Diffusion to create imagery could be liable.

Rahul Rajan, founder of motion capture technology firm Uplift Labs, says that the floodgates have opened for A.I. “The pace of progress is simply mind-boggling, with new papers, projects, and models being released every day. It’s natural that people start looking for the brakes.”

Rajan’s company provides motion capture services to athletes so they can analyze their movement and monitor changes over time. With generative A.I., Rajan says, his platform can synthesize the collected data and explain it in an understandable way. But he cautions that putting generative A.I. in the wrong hands could spell disaster, particularly when it comes to the spread of misinformation. He adds that while regulations may risk stifling innovation, a clear set of rules governing the use of generative A.I. could help assuage some of the general public’s confusion around the technology, and in turn accelerate its adoption.

Two figures pushing for more responsibility in A.I. are Hemant Taneja, CEO of venture capital firm General Catalyst, and Darren Walker, CEO of the Ford Foundation, which focuses on reducing poverty and injustice through grants and investments. Both organizations are members of Responsible Innovation Labs, a coalition of founders and investors focused on introducing innovative technologies in a responsible way. In an interview with Inc., the pair said the tech industry can’t afford to continue approaching A.I. with a “move fast and break things” ideology. That approach, they say, has resulted in the release of several problematic A.I. products.

In the past few months, there’s been a flurry of companies trying, and failing, to incorporate generative A.I. into their products. Snapchat recently released “My AI,” a ChatGPT-powered chatbot that can engage in natural-language conversation with users. One user posted on Twitter that, while pretending to be a 13-year-old girl, he had successfully asked the bot for advice on how to lie to parents, the kind of interaction that could expose the company to lawsuits. Elsewhere, Google parent company Alphabet lost $100 billion in market value after the disastrous launch of its ChatGPT-like chatbot, Bard, which struggled with even basic math and would consistently give false answers to simple queries.

For Taneja, this unrestricted use of technology that still isn’t fully understood is a red flag. “It’s like we all figured out how to make a nuclear bomb, and now everyone is rushing to make their own,” he says. “You can’t bring powerful technology like this into the world that way. It has to be intentional.”

Responsible Innovation Labs is developing standards and best practices for the use of artificial intelligence, and Walker says that his foundation is supporting efforts to place technologists across offices in D.C. Earlier this month, Senator Chuck Schumer launched an effort to develop a framework for regulating artificial intelligence, writing in a statement that the framework would “prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology.”

In China, regulators are a few steps ahead. In early April, Chinese authorities issued draft measures for managing generative artificial intelligence services. The proposed rules call for a ban on all content that doesn’t reflect the Socialist Core Values or that infringes intellectual property rights. The European Union is also ahead of the United States, having recently passed a draft version of a legal framework to regulate A.I.

For entrepreneurs racing to become early adopters of generative A.I., Walker offered some advice: “Get an engineer, a political scientist, a sociologist, a poet, and a lawyer together in a room, and don’t stop talking until you’ve hit on a way to innovate that everyone can agree on.”