Snapchat confirms My AI mishap 'just a glitch'
Snapchat's My AI feature, an in-app AI chatbot that debuted earlier this year, uploaded its own story on Tuesday and stopped replying to users.
Snapchat users were alarmed after the My AI chatbot, launched in April, posted its own story and stopped responding to them. The feature, powered by OpenAI's ChatGPT technology, "freaked out" users in what turned out to be a technical glitch.
One user wrote on X that the action had "freaked them out."
My Snapchat AI posted a random 1 second story and isn’t replying to me AND IM FREAKED OUT
— Ryan™ (@RyanJKrul) August 16, 2023
Some mistook the bot's image for a photo of their own ceiling. When users attempted to chat with the bot, the AI responded that it had "encountered a technical issue."
Did Snapchat Ai just add a picture of my wall/ceiling to their Snapchat story?
— Matt Esparza (@matthewesp) August 16, 2023
Snapchat AI - Left
My wall/ceiling- Right pic.twitter.com/bh8I3Aiwun
Snapchat confirmed Wednesday evening that the issue was simply a "technical glitch," and a spokesperson told TechCrunch it was "now resolved."
The glitch raised the question of whether the AI bot could post its own stories, something a spokesperson told TechCrunch is not yet possible. The bot can send text messages and even Snap back with images.
Read more: Boosted productivity through ChatGPT comes at huge risks
Snap's My AI was a contentious addition to the app, with users leaving one-star reviews and calling for it to be disabled.
A Washington Post investigation also revealed the bot could respond inappropriately to minors, prompting Snap to introduce parental controls and further safety precautions.
On August 11, Allistair Barr of Business Insider reported on the prevalence of spider bots: digital crawlers that have been combing websites and collecting data for years.
Barr notes that OpenAI recently acknowledged operating one of these bots.
It is called GPTBot, a tool used to scrape and gather web content for AI model training. GPT-5, the company's next large model, will most likely be trained on the data this bot collects.
Read more: AI-generated tweets considered more trustworthy than humans: Study
Last month, Google, Microsoft, OpenAI, and Anthropic announced a new council to monitor the safe development of the most advanced models of AI.
The four influential firms founded the Frontier Model Forum, an organization focused on the "safe and responsible" development of frontier AI models, meaning AI technology more sophisticated than the models currently available.
In May, the Center for AI Safety warned that artificial intelligence (AI) technology should be classified as a societal risk, in the same class as pandemics and nuclear war.
Geoffrey Hinton, dubbed the godfather of AI, quit Google in May, citing AI's "existential risk".