Reddit is cracking down on AI bots

In May, Reddit announced it would allow OpenAI to train its models on Reddit content for a price. Now, according to The Verge, Reddit will block most automated bots from accessing, learning from, and profiting from its data without a similar licensing agreement.

Reddit plans to do this by updating its robots.txt file, the “basic social contract of the web” that tells web crawlers which parts of a site they may access. Many nascent AI companies (including, at one point, OpenAI) train their models on content scraped from across the web without regard for copyright or individual sites’ Terms of Service.
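For context, a robots.txt file is just a plain-text list of per-crawler rules served at a site's root. The sketch below is hypothetical, not Reddit's actual file, and simply illustrates the kind of blanket rule the article describes; GPTBot is the user-agent name OpenAI publishes for its crawler.

```
# Hypothetical robots.txt sketch, not Reddit's real file.
# Each block names a crawler and the paths it is asked not to fetch.

User-agent: GPTBot
Disallow: /

# A wildcard rule covering every other crawler:
User-agent: *
Disallow: /
```

As Reddit's own legal officer notes below, these rules are requests rather than technical or legal barriers: a crawler must choose to honor them.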

Per The Verge’s Alex Heath, search engines like Google got away with this form of scraping thanks to the “give-and-take” of Google sending traffic back to individual sites in exchange for the ability to crawl them for information. Now, AI companies are tipping the balance by taking that same information and providing it to users without sending them back to the sites the information came from.

Reddit’s chief legal officer, Ben Lee, told The Verge that the parameters of robots.txt are not legally enforceable but that publicizing Reddit’s intention to enforce its content policy is “a signal to those who don’t have an agreement with us that they shouldn’t be accessing Reddit data.”

In a blog post about the change, Reddit noted that “good faith actors – like researchers and organizations… will continue to have access to Reddit content for non-commercial use.” These include the Internet Archive, home to the Wayback Machine.