The Biden Administration's executive order on AI safety has laid out standards for the industry, though its vagueness has raised concerns in the AI community about stifling innovation. The order established six new standards for AI safety and security, along with intentions for ethical AI use within government agencies. Biden said the order aligns with the government's own principles of "safety, security, trust, openness."
He also pointed out that developers face a tricky task in anticipating future risks under the legislation, since those risks rest on assumptions about products that are not yet fully developed. "Companies that continue to value data compliance and privacy and unbiased algorithmic foundations should operate within a paradigm that the government is comfortable with." Martin Casado, a general partner at the venture capital firm Andreessen Horowitz, posted on X (formerly Twitter) that he, along with several AI researchers, academics and founders, has sent a letter to the Biden Administration over the order's potential to restrict open-source AI.
Jeff Amico, the head of operations at Gensyn AI, posted a similar sentiment, calling the order "terrible" for innovation in the U.S. Matthew Putman, the CEO and co-founder of Nanotronics, a global leader in AI-enabled manufacturing, commented to Cointelegraph that the order signals a need for regulatory frameworks that ensure consumer safety and the ethical development of AI on a broader scale.
Putman said that fears about AI’s “apocalyptic” potential are “overblown relative to its prospects for near-term positive impact.”