This Hacker Team Is Bulletproofing AI Models For Companies Like OpenAI And Anthropic

  • 📰 ForbesTech

Gray Swan AI News

AI Safety, AI Models, Security

Sarah Emerson is a senior writer who reports on technology companies and culture in Silicon Valley. She's broken news about the empires of billionaires such as Eric Schmidt and fallen billionaire Ryan Breslow. Sarah has also followed the trends and ideologies shaping today's AI zeitgeist.

The researchers behind Gray Swan AI started the company after finding a major vulnerability in models from OpenAI, Anthropic, Google and Meta. Now, they build products that help safeguard them.

The breakneck pace at which AI is evolving has created a vast ecosystem of new companies — some creating ever more powerful models, others identifying the threats that may accompany them. Gray Swan is among the latter, but it takes things a step further by building safety and security measures for some of the issues it identifies. “We can actually provide the mechanisms by which you remove those risks or at least mitigate them,” Kolter told Forbes.

Looking forward, Gray Swan is keen on cultivating a community of hackers, and it’s not alone. At last year’s Defcon security conference, more than 2,000 people participated in an AI red-teaming challenge. Model developers often enlist internal and external red teamers to assess new models, and have announced official bug bounty programs that reward sleuths for exposing exploits around high-risk domains, such as CBRN (chemical, biological, radiological and nuclear) weapons. Independent findings — such as a vulnerability in Anthropic’s Claude Sonnet-3.5 — are also valuable resources for model developers.


