CEO Ted Leonard, who runs the 40-strong company out of Edwards, Colorado, told Reuters he is in talks with multiple tech companies to license Photobucket's 13 billion photos and videos for use in training generative AI models that produce new content in response to text prompts.

"There is a rush right now to go for copyright holders that have private collections of stuff that is not available to be scraped," said Edward Klaris of the law firm Klaris Law, which says it is advising content owners on deals worth tens of millions of dollars apiece to license archives of photos, movies and books for AI training.
Many major market research firms say they have not even begun to estimate the size of the opaque AI data market, where companies often don’t disclose agreements. Those researchers who do, such as Business Research Insights, put the market at roughly $2.5 billion now and forecast it could grow close to $30 billion within a decade.
In the months after ChatGPT debuted in late 2022, for instance, companies including Meta, Google, Amazon and Apple all struck agreements with stock image provider Shutterstock to use hundreds of millions of images, videos and music files in its library for training, according to a person familiar with the arrangements.
OpenAI, an early Shutterstock customer, has also signed licensing agreements with at least four news organizations, including The Associated Press and Axel Springer. Thomson Reuters, the owner of Reuters News, separately said it has struck deals to license news content to help train AI large language models, but didn’t disclose details.
The priciest images in his portfolio are those used to train AI systems to block content the tech companies bar, such as graphic violence, said the supplier, who spoke on condition his company not be identified, citing commercial sensitivity. AI systems have been caught regurgitating exact copies of their training data, spitting out, for example, the Getty Images watermark, verbatim paragraphs of New York Times articles and images of real people. That means a person's private photos or intimate thoughts posted decades ago could wind up in generative AI outputs without notice or explicit consent.