Craig Martell, the Pentagon’s chief digital and artificial intelligence officer, wants companies to share insights into how their AI software is built — without forfeiting their intellectual property — so that the department can “feel comfortable and safe” adopting it.
“They’re saying: ‘Here it is. We’re not telling you how we built it. We’re not telling you what it’s good or bad at. We’re not telling you whether it’s biased or not,’” he said. Martell’s team, which is running a task force to assess large language models, has already found 200 potential uses for them within the Defense Department, he said.
He hopes the February symposium will help build what he called “a maturity model” to establish benchmarks relating to hallucination, bias and danger. While it might be acceptable for the first draft of a report to include AI-related mistakes — something a human could later weed out — those errors wouldn’t be acceptable in riskier situations, such as when information is needed to make operational decisions.