Congress Warns Science Agency Over AI Grant to Tech-Linked Think Tank
Key members of the House Science Committee are sounding the alarm over a proposed artificial intelligence research partnership between the National Institute of Standards and Technology and the RAND Corporation, an influential think tank with ties to tech billionaires and the artificial intelligence industry.

Lawmakers from both parties sent a letter to NIST on Dec. 14 criticizing the agency for a lack of transparency and for failing to announce a competitive process for planned research grants related to the new U.S. Artificial Intelligence Safety Institute.

The lawmakers also raised concerns about the quality of AI safety research conducted by outside groups, saying such studies are frequently "shrouded in secrecy," "provide no evidence for their claims" and often disagree even on basic definitions and principles.

"We believe that this work should not be expedited at the expense of proper execution," wrote six lawmakers, including committee Chairman Frank Lucas (R-Okla.), Rep. Zoe Lofgren (D-Calif.) and the heads of key subcommittees.

NIST, a small agency within the Commerce Department, has played a central role in President Joe Biden's plans for artificial intelligence. The White House ordered NIST to create the AI Safety Institute in its October executive order on AI, and earlier this year the agency released an influential framework to help organizations manage AI risks.

However, NIST is also notoriously underfunded and will almost certainly need help from other researchers to achieve its expanding AI mission.

NIST has not publicly announced which groups it plans to award research grants to through the AI Safety Institute, and the House Science letter does not name the organizations involved. But one of them is RAND, according to an AI researcher and an AI policy specialist at a major tech company, both familiar with the situation.

A recent RAND report on biosecurity risks posed by advanced artificial intelligence models is cited in the footnotes of the House letter as a troubling example of research that has not undergone scientific peer review.

After this article was published Tuesday, RAND spokeswoman Erin Dick said the House committee had mischaracterized the think tank's report on artificial intelligence and biosecurity. Dick said the report cited in the letter "went through the same rigorous quality assurance process as all RAND reports, including peer review" and that all studies cited in the report were also peer-reviewed.

The RAND spokesperson did not respond to questions about the AI safety research partnership with NIST.

Lucas spokeswoman Heather Vaughan said that on Nov. 2, three days after Biden signed the AI executive order, NIST officials told committee staff that the agency planned to award AI safety research grants to two outside groups without any apparent competition, public posting or announcement of funding opportunities. Lawmakers grew increasingly concerned, she said, when the plans were not mentioned during NIST's Nov. 17 public session on the AI Safety Institute or during a Dec. 11 congressional staff briefing.

Vaughan did not confirm or deny that RAND was one of the organizations in question, nor did she name any other groups that NIST told committee staff it intended to collaborate with on AI safety research. A spokesperson for Lofgren declined to comment.

RAND's budding collaboration with NIST comes after the think tank's work on Biden's executive order on artificial intelligence, which was drafted with active input from senior RAND staff. The venerable think tank has also faced criticism for accepting more than $15 million in artificial intelligence and biosecurity grants earlier this year from Open Philanthropy, a prolific funder of effective-altruist causes financed largely by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz.

Many AI and biosecurity researchers say effective altruists, including RAND CEO Jason Matheny and senior information scientist Jeff Alstott, focus too heavily on the potential catastrophic risks posed by AI and biotechnology. Researchers say these risks are largely unsupported by evidence, and they warn that the movement's ties to major AI companies suggest an effort to neutralize corporate rivals or distract regulators from existing AI-related harms.
"Many people are asking, 'How can RAND remain objective if it accepts Open Philanthropy funding and receives money from the U.S. government for this work?'" said an AI policy expert who was granted anonymity due to the sensitivity of the subject.

In the letter, the lawmakers warned NIST that "scientific merit and transparency must remain a top priority" and said they expect the agency to "hold recipients of federal funds for AI safety research to the same rigorous standards of scientific and methodological quality that distinguish the broader federal research enterprise."

A NIST spokesperson said the science agency is "exploring options for a competitive process to support collaborative research opportunities" related to the AI Safety Institute, adding that "no decisions have been made."

The spokesperson did not say whether NIST officials told House Science staff in the Nov. 2 briefing that the agency plans to collaborate with RAND on AI safety research. The spokesperson added that NIST "maintains scientific independence in all of its work" and will "carry out its [AI mandate] responsibilities in an open and transparent manner."

Both the AI researcher and the AI policy expert said lawmakers and House Science Committee staff are concerned about NIST's decision to partner with RAND, given the think tank's ties to Open Philanthropy and its growing focus on the existential risks of AI.

“The House Science Committee is really focused on measurement science,” the AI policy expert said. “And [the existential risk community] is not aligned with measurement science. They don't use any criteria.”

Rumman Chowdhury, an artificial intelligence researcher and co-founder of the nonprofit technology organization Humane Intelligence, said the committee's letter indicates that Congress is beginning to understand "how important measurement is" when it comes to regulating artificial intelligence.

"There is not only hype about AI, there is also hype about governing AI," Chowdhury wrote in an email. She added that the House letter suggests Capitol Hill is becoming aware of the ideological and political perspectives, expressed in scientific language, that shape how "AI governance" is defined and what it is expected to account for.