Multiple artificial intelligence companies are circumventing a common web standard used by publishers to block the scraping of their content for use in generative AI systems, content licensing startup TollBit has told publishers.
A Wired investigation published this week found Perplexity likely bypassing efforts to block its web crawler via the Robots Exclusion Protocol, or "robots.txt," a widely accepted standard meant to determine which parts of a site are allowed to be crawled.

The News Media Alliance, a trade group representing more than 2,200 U.S.-based publishers, expressed concern about the impact that ignoring "do not crawl" signals could have on its members.
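For readers unfamiliar with the standard, the sketch below shows how a compliant crawler is expected to consult robots.txt before fetching a page. The crawler name "ExampleAIBot" and the domain example.com are hypothetical, and the rules are illustrative; the key point is that the protocol is purely advisory, so nothing technically prevents a crawler from skipping this check.

```python
import urllib.robotparser

# Illustrative robots.txt directives; "ExampleAIBot" is a hypothetical
# crawler name used here only to demonstrate the rule format.
sample_rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(sample_rules.splitlines())

# A compliant crawler checks the rules before fetching a URL.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False: blocked
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True: allowed
```

Compliance with these directives is voluntary, which is why "do not crawl" signals can simply be ignored, as the Wired investigation describes.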
TollBit tracks AI traffic to publishers' websites and uses analytics to help publishers and AI companies settle on fees to be paid for the use of different types of content. "What this means in practical terms is that AI agents from multiple sources are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges."
The AI companies use the content both to train their models and to generate summaries of real-time information.