How a billionaire-backed network of AI advisers took over Washington

  • 📰 politico


A sprawling network spread across Congress, federal agencies and think tanks is pushing policymakers to put AI apocalypse at the top of the agenda — potentially boxing out other worries and benefiting top AI companies with ties to the network.

Pictures taken inside September’s Senate AI Insight Forum — a meeting of top tech CEOs that was closed to journalists and the public — show at least two Horizon AI fellows in attendance. | Stefani Reynolds/AFP/Getty Images

An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.

Mike Levine, a spokesperson for Open Philanthropy, stressed the group’s separation from Horizon. He said Horizon “originally started the fellowship as consultants to Open Phil until they could launch their own legal entity and pay fellows’ salaries directly,” but that even during that period Open Philanthropy “did not play an active role in screening, training, or placement of fellows.”

When asked about the ethical and conflict-of-interest concerns, Horizon co-founder and executive director Remco Zwetsloot said the fellowship program is “not for the pursuit of particular policy goals,” does not screen applicants for belief in long-term AI risks, and includes fellows with a range of views on AI’s existential dangers. He said Horizon does not direct fellows to particular congressional offices.

Effective altruism, or EA, has become a popular approach in Silicon Valley circles, and counts among its adherents key figures at companies like OpenAI, Anthropic and DeepMind. Some of those individuals signed onto a public statement warning of the “risk of extinction from AI.” That letter was organized by the Center for AI Safety.

As with Schmidt’s network, Open Philanthropy’s influence effort involves many of the outside policy shops that Washington relies on for technical expertise. Open Philanthropy also funds work to study biosecurity, which overlaps closely with concerns around the use of AI models to develop bioweapons.

Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. Venkatasubramanian said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.

Hiday rejected the idea that RAND’s work on AI’s long-term risks would distract lawmakers from more immediate harms. “The analytic community can, and should, discuss catastrophic AI risks at the same time it considers other potential impacts from the emerging technology,” Hiday said.

 

