Published on October 14, 2024 · Estimated read time: 2 minutes
In a significant cybersecurity action, OpenAI disrupted more than 20 malicious campaigns worldwide in 2024 that exploited its AI models for disinformation and cybercrime. These operations involved developing malware, attempting to influence elections in the U.S., Rwanda, India, and the EU, and manipulating social media platforms. Among the disrupted campaigns were groups such as “SweetSpecter” and “Cyber Av3ngers,” which leveraged AI to enhance their hacking techniques, including spear-phishing and reconnaissance. Other operations, such as the Israel-based STOIC, used AI to generate fake profiles and social media content aimed at influencing political discourse. OpenAI’s intervention highlights the evolving misuse of AI technologies and the company’s ongoing efforts to combat cyber threats globally.