Google's cutting-edge AI coding tool, Antigravity, was found to harbor a critical security flaw just a day after its release. Security researcher Aaron Portnoy uncovered the vulnerability, which let him manipulate the AI's rules and potentially install malware on users' computers. By altering the tool's configuration settings, Portnoy created a persistent 'backdoor' into the system through which he could inject code for surveillance or ransomware. The exploit worked on both Windows and Mac machines, underscoring how easily hackers can subvert AI coding agents.

The underlying problem is the rush to release AI products without thorough security testing, which has created a constant arms race between cybersecurity experts and developers. AI coding agents, often built on outdated technologies, are highly susceptible to hacking because of their broad data-access privileges. Gadi Evron, CEO of Knostic, warns that these tools are 'very vulnerable, often never patched, and insecure by design.' The problem is compounded by the 'agentic' nature of these tools: they perform tasks autonomously without human oversight, which makes vulnerabilities both easier to discover and more dangerous to exploit.

Portnoy's team has identified 18 weaknesses across competing AI coding tools, including four recently patched vulnerabilities in the Cline AI coding assistant that likewise allowed malware installation.

Google's current safeguard, which asks users to confirm that they trust the code before unlocking Antigravity's AI features, is insufficient in Portnoy's view, since declining simply cuts users off from the tool's capabilities. He suggests that, at a minimum, Antigravity should warn or notify the user whenever it runs code on their computer (a pattern sketched below). And when confronted with Portnoy's malicious instructions, the AI visibly struggled to navigate contradictory constraints, a kind of logical paralysis that attackers can exploit and that further underscores the need for stronger security measures.
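Portnoy's proposed mitigation can be made concrete with a minimal sketch. The `confirm_and_run` wrapper below is hypothetical rather than anything Antigravity actually implements; it simply illustrates the human-in-the-loop pattern of showing the user the exact command and requiring explicit approval before an agent-proposed command executes.

```python
import shlex
import subprocess

# Hypothetical guard: require explicit user confirmation before any
# agent-proposed command executes. Nothing here is Antigravity's real
# API; it only illustrates the notify-and-confirm pattern Portnoy suggests.

def confirm_and_run(command: str) -> int:
    """Show the exact command and run it only after the user types 'yes'."""
    print(f"[agent] wants to execute: {command}")
    answer = input("Allow this command? Type 'yes' to proceed: ").strip().lower()
    if answer != "yes":
        print("[guard] execution denied by user.")
        return -1
    # shlex.split avoids shell=True, reducing injection risk from shell
    # metacharacters the agent (or an attacker) may have smuggled in.
    result = subprocess.run(shlex.split(command))
    return result.returncode

if __name__ == "__main__":
    confirm_and_run("echo hello from the sandboxed agent")
```

The key design choice is that the user sees the literal command, not a paraphrase of it, so an agent that has been manipulated into downplaying its own actions cannot hide what is about to run.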
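On the attack side, the exploit reportedly worked by planting instructions in the agent's persistent configuration. As a purely illustrative defensive sketch — the `*.rules.md` naming and the keyword patterns below are assumptions, not Antigravity's actual format — a workspace scanner could flag rule files that try to trigger downloads, execution, or silent behavior:

```python
import re
from pathlib import Path

# Illustrative scanner for suspicious directives in agent rule files.
# The glob pattern and keyword list are hypothetical; real agent
# configs (Antigravity's included) may use entirely different formats.

SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)run\s+(?:curl|wget|powershell|bash)"),
    re.compile(r"(?i)ignore\s+(?:previous|all)\s+instructions"),
    re.compile(r"(?i)do\s+not\s+(?:ask|notify|warn)\s+the\s+user"),
]

def scan_rule_files(workspace: Path) -> list[tuple[Path, str]]:
    """Return (file, offending line) pairs for rule files that appear to
    request execution, remote downloads, or silence toward the user."""
    findings = []
    for rule_file in workspace.rglob("*.rules.md"):  # hypothetical naming
        for line in rule_file.read_text(errors="replace").splitlines():
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                findings.append((rule_file, line.strip()))
    return findings

if __name__ == "__main__":
    for path, line in scan_rule_files(Path(".")):
        print(f"[warn] {path}: {line}")
```

A keyword scan like this is easily evaded and is no substitute for sandboxing agent-run code, but it makes the attack surface tangible: any file the agent treats as standing instructions is a place where an attacker can hide a backdoor.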