- You can now stop Google from using the content on your website to train its AI models
- The new Google-Extended token in robots.txt can tell Google’s crawlers to keep a site in Search without using its content to train new AI models like the ones powering Bard.
- Some websites, including The New York Times, have opted to legally prohibit companies from utilizing their content for AI training by updating their terms of service.
Google has unveiled a new control called Google-Extended, which gives website publishers the option to exclude their data from the training of Google’s AI models while keeping their sites accessible through Google Search. Sites can continue to be crawled and indexed by web crawlers like Googlebot without their data contributing to AI model training.
Google-Extended gives publishers control over whether their websites are used to improve Bard and the Vertex AI generative APIs, letting them keep their content accessible to crawlers while exempting it from AI training. Google had previously disclosed its intention to train its AI chatbot, Bard, on publicly accessible web data.
Danielle Romain, Google’s VP of Trust, explained in a blog post that the company recognizes web publishers’ desire for greater choice and control over how their content is used in emerging generative AI applications. To use Google-Extended, publishers add a disallow rule for the “Google-Extended” user agent to their site’s robots.txt file, the file that tells automated web crawlers which content they may access.
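A minimal robots.txt sketch of this setup might look like the following; the `Disallow: /` rule under the Google-Extended user agent opts the whole site out of AI training, while regular Googlebot crawling is unaffected:

```
# Opt the entire site out of use for training Google's AI models
User-agent: Google-Extended
Disallow: /

# Googlebot continues to crawl and index the site for Search as usual
User-agent: Googlebot
Allow: /
```

A narrower `Disallow` path (for example, `Disallow: /articles/`) would exclude only that section of the site from AI training rather than the whole domain.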
Google expressed its commitment to exploring additional machine-readable approaches to offer web publishers more choices and control as AI applications expand. The company emphasized that it would share further developments in this regard soon.
Several websites have already taken steps to block web crawlers used for data scraping and AI model training, including those used by OpenAI’s ChatGPT. Notable sites such as The New York Times, CNN, Reuters, and Medium have implemented measures to restrict access to their content for AI training purposes. Blocking Google, however, presents unique challenges since complete exclusion from Google’s crawlers would result in a loss of search engine indexing. Some websites, including The New York Times, have opted to legally prohibit companies from utilizing their content for AI training by updating their terms of service.
Medium recently announced that it is blocking web crawlers across the board until more nuanced solutions become available, echoing concerns from many other websites grappling with the trade-off between search indexing and data protection.
Google’s introduction of Google-Extended offers web publishers a more selective approach to participating in AI training data, aligning with evolving preferences in the digital publishing landscape.