Cloudflare Rolls Out Permission-Based Model to Protect Content from AI Scraping


Cloudflare has announced it will become the first internet infrastructure provider to block AI crawlers by default, ushering in a new permission-based model that empowers website owners to control how their content is used by AI companies. This move aims to address a growing imbalance where AI crawlers scrape articles, images, and other original material without credit or compensation, undermining traditional internet economics that reward content creators through search engine-driven traffic and advertising revenue. “If the internet is to survive the age of AI, we must give publishers back control of their content and create a new economic model that works for everyone,” said Matthew Prince, Cloudflare’s co-founder and CEO.


Publishers like Condé Nast, Dotdash Meredith, Gannett Media, Pinterest, Reddit, and Ziff Davis have praised the initiative as a critical step toward fairer treatment of intellectual property and sustainable digital ecosystems. With this update, AI crawlers must now disclose their purpose and secure explicit permission to access sites. Cloudflare’s vast network and advanced bot management already process trillions of daily requests, enabling precise detection of AI bots. By collaborating on transparent bot identification protocols, Cloudflare’s default blocking and new registry tools are positioned to protect publishers’ content while allowing AI companies to innovate within agreed boundaries.
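The permission model described here builds on the long-standing robots.txt convention, under which site owners name crawlers by their published user-agent strings and declare what they may fetch. A minimal illustrative fragment is below; GPTBot and CCBot are the published identifiers for OpenAI's and Common Crawl's crawlers, but compliance with robots.txt is voluntary, which is why Cloudflare enforces blocking at the network edge rather than relying on the file alone:

```text
# Illustrative robots.txt: deny known AI training crawlers by user agent
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Conventional search crawlers remain allowed
User-agent: Googlebot
Allow: /
```

Cloudflare's default blocking effectively turns this honor-system declaration into an enforced rule: crawlers that ignore the directives are identified by the bot-management layer and denied at the request level.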

Read More: Cloudflare limits scraping by AI crawlers on the internet – permission-based approach enables new business model