Meta warns of hackers posing as AI tools to mine user information
Chief Information Security Officer Guy Rosen says hackers are sharing booby-trapped links posing as AI tools on Meta's platforms to gain access to users' information.
Hackers promising generative artificial intelligence (AI) tools similar to ChatGPT are deceiving users into installing malware on their devices, Meta warned on Wednesday.
In April, security analysts at the social-media conglomerate discovered multiple strains of malware disguised as ChatGPT or other AI tools on its platforms, Chief Information Security Officer Guy Rosen said.
"The latest wave of malware campaigns have taken notice of generative AI technology that's been capturing people's imagination and everyone's excitement," Rosen revealed.
A common tactic is to bait users into clicking booby-trapped links or downloading fraudulent programs, which allows hackers to steal their data.
"We've seen this across other topics that are popular, such as crypto scams fueled by the immense interest in digital currency." Rosen also added, "From a bad actor's perspective, ChatGPT is the new crypto."
Rosen said Meta has blocked more than 1,000 such web addresses to safeguard its users.
Rosen also warned that hackers could weaponize generative AI for malicious purposes, saying, "Generative AI holds great promise and bad actors know it, so we should all be very vigilant to stay safe."
Meta, for its part, is using generative AI to better understand hackers' practices.