Tech Firms Agree To AI Safeguards Set By White House

WASHINGTON -- President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies developing AI technology to meet a set of safeguards brokered by the White House are an important step toward managing the "extraordinary" opportunities and risks of the technology.

Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they are released. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems, though they don't detail who will audit the technology or hold the companies accountable.

"We need to be clear and alert to the threats new technologies can create," Biden said, adding that companies have a "fundamental responsibility" to ensure the security of their products.

"Social media has shown us the harm that powerful technology can do without the right safeguards in place," Biden added. "These commitments are a promising step, but we have a lot more work to do together."

A surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other risks.

The four tech giants, along with ChatGPT maker OpenAI and startups Anthropic and Inflection, have committed to security testing "carried out in part by independent experts" to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical risks of advanced AI systems gaining control of physical systems or "self-replicating" by making copies of themselves.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images and audio from those generated by artificial intelligence, known as "deepfakes."

On Friday, executives from seven companies met behind closed doors with Biden and other officials, pledging to meet the standards.

He "said very loudly and clearly" that he wanted the company to continue to innovate, but at the same time "felt that it needed a lot of attention," Inflection CEO Mustafa Suleiman said in an interview after the White House meeting.

"It's a big deal bringing all the labs together, all the companies," said Suleiman, whose Palo Alto, California-based startup is the youngest and smallest company. "It's very competitive and we wouldn't be in another situation."

Under the commitments, the companies will also publicly report flaws and risks in their technology, including effects on fairness and bias.

The voluntary pledges are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

Some advocates of AI regulation say Biden's move is a start, but more needs to be done to hold the companies and their products accountable.

"A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough," said Amba Kak, executive director of the AI Now Institute. "We need a much more wide-ranging public deliberation, and that's going to bring up issues that companies almost certainly won't voluntarily commit to, because it would lead to substantively different results that may more directly impact their business models."

Suleyman said that agreeing to subject AI systems to outside "red team" testing, although voluntary, was not an easy pledge.

"The commitment we've made to have red teams essentially try to break our models, identify weaknesses and then share those methods with the other large language model developers is a pretty significant commitment," Suleyman said.

Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate artificial intelligence and work with the Biden administration "and our bipartisan colleagues" to build upon the pledges made Friday.

A number of tech executives have called for regulation, and several attended a White House summit on AI in May.

Microsoft President Brad Smith said in a blog post Friday that his company is making commitments that go beyond the White House pledge, including support for regulation that would create a "licensing regime for highly capable models."

Some experts and upstart competitors worry that such regulation could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are squeezed out by the high cost of making their AI systems comply.

The White House pledge notes that it mostly applies only to models that are "more powerful than the current industry frontier" set by recently released models, such as OpenAI's GPT-4 and its image generator DALL-E 2, and similar releases from Anthropic, Google and Amazon.

A number of countries have been looking at ways to regulate AI, and European Union lawmakers are negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to pose the highest risks.

U.N. Secretary-General Antonio Guterres recently said the United Nations is the "ideal place" to adopt global standards and has appointed a panel that will report on options for global AI governance later this year.

Guterres also said he is considering requests from several countries to create a new United Nations organization to support global efforts to manage artificial intelligence, inspired by models such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said Friday that it has already consulted with a number of countries on the voluntary commitments.

The pledge focuses largely on safety risks but does not address other concerns about the latest AI technology, including its effect on jobs and market competition, the environmental resources required to build the models, and copyright questions about the writing, art and other human works being used to teach AI systems how to produce human-like content.

Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP's archive of news stories. The amount OpenAI will pay for that content was not disclosed.

---

O'Brien reported from Providence, Rhode Island.
