Some 3,500 competitors have tapped away on laptops seeking to expose flaws in eight leading large language models representative of technology's next big thing. But don't expect quick results from this first-ever independent "red-teaming" of multiple models.
Conventional software uses well-defined code to issue explicit, step-by-step instructions. OpenAI's ChatGPT, Google's Bard and other language models are different. Trained largely by ingesting -- and classifying -- billions of data points from internet crawls, they are perpetual works-in-progress, an unsettling prospect given their transformative potential for humanity.
A team including Carnegie Mellon researchers found leading chatbots vulnerable to automated attacks that cause them to produce harmful content. "It is possible that the very nature of deep learning models makes such threats inevitable," they wrote.
Researchers have found that "poisoning" a small collection of images or text in the vast sea of data used to train AI systems can wreak havoc -- and be easily overlooked. One study surveying more than 80 organizations found the vast majority had no response plan for a data-poisoning attack or dataset theft. The bulk of the industry "would not even know it happened," the authors wrote.
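To make the poisoning idea concrete, here is a minimal, self-contained sketch -- not drawn from the article or any study it mentions -- showing how just two mislabeled training points (under 10% of a toy dataset) can flip the predictions of a simple nearest-centroid classifier. All the numbers and the classifier itself are illustrative assumptions.

```python
# Toy nearest-centroid classifier: each class is represented by the mean
# of its training points; inputs are assigned to the nearest class mean.

def centroids(train):
    """Compute the per-class mean of 2-D labeled points."""
    sums = {}
    for (x, y), lab in train:
        sx, sy, n = sums.get(lab, (0.0, 0.0, 0))
        sums[lab] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def predict(cents, p):
    """Return the label whose centroid is closest to point p."""
    return min(cents, key=lambda lab: (p[0] - cents[lab][0]) ** 2
                                    + (p[1] - cents[lab][1]) ** 2)

# 40 clean examples: class 0 clustered near (0, 0), class 1 near (5, 5).
clean = [((dx * 0.1, dy * 0.1), 0) for dx in range(5) for dy in range(4)] \
      + [((5 + dx * 0.1, 5 + dy * 0.1), 1) for dx in range(5) for dy in range(4)]

# The "poison": two far-out points deliberately mislabeled as class 0.
# They drag the class-0 centroid deep into (and past) class-1 territory.
poison = [((90.0, 90.0), 0), ((95.0, 95.0), 0)]

probe = (9.0, 9.0)  # clearly on class 1's side of the clean boundary
before = predict(centroids(clean), probe)          # clean model: 1
after = predict(centroids(clean + poison), probe)  # poisoned model: 0
print(f"probe {probe}: clean model says {before}, poisoned model says {after}")
```

The attack succeeds without touching the 40 legitimate examples, which is why, as the surveyed organizations admitted, such tampering can go entirely unnoticed.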
Ross Anderson, a Cambridge University computer scientist, worries AI bots will erode privacy as people engage them to interact with hospitals, banks and employers, and as malicious actors leverage them to coax financial, employment or health data out of supposedly closed systems.
Source: VancouverSun