, president of the Signal Foundation and co-founder of the AI Now Institute at NYU, to sort through the real threat of A.I. and what the doomerism discourse is missing. Our conversation has been edited and condensed for clarity.

What do you make of the concerns raised by Geoffrey Hinton and others when it comes to A.I. safety?

The risks that I see related to A.I. are that only a handful of corporations have the resources to create these large-scale A.I. systems.
That data is then being wrapped up into these machine learning models that are being used in very sensitive ways with very little accountability, almost no testing, and backed by extremely exaggerated claims that are effectively marketing for the companies that stand to profit from them.

You work with the AI Now Institute, which argues that nothing about artificial intelligence is inevitable.
Part of the narrative of inevitability has been built through a sleight of hand that for many years has conflated the products being created by these corporations—email, blogging, search—with scientific progress. The message, implicitly or explicitly, has been that these products are scientific progress itself. For a long time, that staved off regulation. It intimidated people who didn't have computer science degrees because they didn't want to look stupid. That led us, in large part, to where we are.