OpenAI admits to 'spiderbots' crawling over websites, collecting data
According to Business Insider tech editor Alistair Barr, "spiderbots" like Google's Googlebot collect information from web pages, a practice he argues can only be detrimental to users in the long run.
According to Alistair Barr of Business Insider, there are plenty of spiderbots: digital spiders that have been crawling websites and collecting data for years.
The most active, he claims, is Googlebot, which collects site information automatically so Google can rank and deliver search results accordingly.
Barr notes that OpenAI recently admitted to having one of these bots on the loose in the cyber world.
It is called GPTBot, a tool used to scrape and gather web material for training AI models. GPT-5, the company's next large model, will most likely be trained on the data collected by this bot.
GPT-4, ChatGPT, and other sophisticated models respond intelligently and promptly to queries, reducing the need to direct users to the original sources of information. This may be a fantastic user experience, but it means the incentives to offer high-quality free knowledge online begin to dwindle fast, Barr contends.
According to him, it is mere "self-sabotage" to allow the bot to "crawl" a website, a realization he says is spreading quickly online; outlets such as The Verge have already taken steps to block GPTBot.
Although the company has unveiled a method to disable the bot, some developers speculate that OpenAI has been surreptitiously collecting internet data for months, if not years.
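The opt-out method relies on the web's long-standing robots.txt convention. A minimal sketch of an entry refusing the crawler site-wide, assuming the "GPTBot" user-agent token OpenAI has published, would be:

```
User-agent: GPTBot
Disallow: /
```

Crucially, this only asks well-behaved crawlers to stay away from future visits; it does nothing about data already collected.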
Prasad Dhumal, a search engine optimization consultant, tweeted this week that "finally, after soaking up all your copyrighted content to build their proprietary product, OpenAI gives you a way to prevent your content from being used to further improve their product."
Neil Clarke, the editor of the science fiction and fantasy magazine Clarkesworld, revealed that the magazine would block another scraping bot from OpenAI as well, questioning whether a secret bot is still being used.
No respect for rights of creative professionals
In an email to Barr, Clarke remarked that "OpenAI and other 'AI' creators have demonstrated repeatedly that they have no respect for the rights of authors, artists, and other creative professionals. Their products are largely based on the copyrighted works of others, taken without authorization or compensation."
Clarke added that their "record on transparency leaves much to be desired."
CCBot is yet another computer spider that explores the internet and collects material. It is managed by Common Crawl, a key source of training data for AI models. Common Crawl retains all of this information, so even if you disable its bot now, your data has very likely already been collected.
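As a sketch of how blocking both crawlers works in practice, the standard-library `urllib.robotparser` can check what a given robots.txt allows. The robots.txt content, URL, and third-party agent name below are illustrative; "GPTBot" and "CCBot" are the user-agent tokens the two crawlers are known to identify themselves with:

```python
from urllib import robotparser

# Hypothetical robots.txt blocking both crawlers discussed in the article.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The named crawlers are refused; agents not listed remain unaffected.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("CCBot", "https://example.com/article"))         # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The asymmetry Clarke points to is visible here: the file can refuse future crawls, but nothing in the protocol reaches back into an archive like Common Crawl's.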
"I'm unaware of anyone that has managed to get Common Crawl to remove data," Clarke noted. "I've tried, but have had no response."
Rather than an "opt-out" option, others like Clarke are demanding the feature be "opt-in," forcing OpenAI to request permission before scraping data.
According to Clarke, an opt-out option is not sufficient. He believes it is not a user's responsibility to provide information to the company without consent, "regardless of the benefits they imagine coming from it."
Barr reached out to OpenAI about the feature and did not receive a response.
OpenAI has made some attempts to respect online content. GPTBot is now meant to filter out sources behind paywalls as well as those known to collect personally identifiable information.
In addition, the company recently announced a partnership with the Associated Press in which OpenAI will pay to license AP material for AI training data.
Clarke advised online content creators to block the bot and communicate their concerns to lawmakers regarding "past, present, and future data collection methodologies."
Last month, Google, Microsoft, OpenAI, and Anthropic announced a new council to monitor the safe development of the most advanced models of AI.
The four influential firms founded the Frontier Model Forum, an organization focused on the "safe and responsible" development of frontier AI models, meaning AI technology more sophisticated than what is currently available.
In May, the Center for AI Safety warned that AI technology should be classified as a societal risk, in the same class as pandemics and nuclear war.
Geoffrey Hinton, dubbed the godfather of AI, quit Google in May, citing AI's "existential risk".