As AI leaps forward, concerns rise that innovation is leaving safety behind

When the United States military captured former Venezuelan President Nicolás Maduro in January, it used an AI tool developed by a private U.S. company. It's unclear exactly what role the tool played, but the company's policy says its products can't be used for violence or to develop weapons.

Now, the Pentagon is considering cutting ties with that company, Anthropic, because the company insists on limits on how the military can use its technology, according to Axios.

Tensions between AI safeguards and national security aren't new. But multiple events in the past month have brought the issue of AI safety – in contexts ranging from weapons development to ethical advertising – into the spotlight.

Why We Wrote This

Artificial intelligence is developing so rapidly that some industry insiders fear safety concerns aren’t getting enough attention. That’s sparking conversation about how to balance innovation, competition, and safeguards.

“A lot of the people who’ve been involved in the field of AI have been thinking about safety in various forms for a long time,” says Miranda Bogen, the founding director of the Center for Democracy and Technology’s AI Governance Lab. “But now those conversations are happening on a much more visible stage.”

This month, researchers resigned from two major U.S. AI companies, citing inadequate safeguards around issues such as consumer data collection. In a Feb. 9 essay titled "Something Big is Happening," investor Matt Shumer warned not only that AI will soon threaten Americans' jobs en masse, but that it could also start to behave in ways its creators "can't predict or control." The essay went viral on social media.

While urging action on very real risks, many AI safety experts caution against overplaying fears about hypothetical scenarios.
