Close Menu
Innovation Village | Technology, Product Reviews, Business
    Facebook X (Twitter) Instagram
    Thursday, September 18
    • About us
      • Authors
    • Contact us
    • Privacy policy
    • Terms of use
    • Advertise
    • Newsletter
    • Post a Job
    • Partners
    Facebook X (Twitter) LinkedIn YouTube WhatsApp
    Innovation Village | Technology, Product Reviews, Business
    • Home
    • Innovation
      • Products
      • Technology
      • Internet of Things
    • Business
      • Agritech
      • Fintech
      • Healthtech
      • Investments
        • Cryptocurrency
      • People
      • Startups
      • Women In Tech
    • Media
      • Entertainment
      • Gaming
    • Reviews
      • Gadgets
      • Apps
      • How To
    • Giveaways
    • Jobs
    Innovation Village | Technology, Product Reviews, Business
    You are at:Home»Artificial Intelligence»OpenAI Fixes Security Hole That Let ChatGPT Be Tricked Into Gmail Data Theft
    OpenAI
    OpenAI

    OpenAI Fixes Security Hole That Let ChatGPT Be Tricked Into Gmail Data Theft

    0
    By Smart Megwai on September 18, 2025 Artificial Intelligence, chatbot, Cybersecurity, Internet

    Radware’s cybersecurity team’s discovery of a flaw in ChatGPT was more than a routine code check; they called the process a “rollercoaster” that ended with a startling breakthrough. They revealed a method hackers could have used to silently steal private data directly from users’ Gmail accounts.

    The vulnerability was centred on ChatGPT’s “deep research” feature, which lets the AI access a user’s personal cloud services to summarise information. But Radware proved this helpful tool could also become a gateway for attackers.

    The attack worked using a deceptive phishing email that looked like a standard HR notification. Hidden within this email was a set of malicious commands. When a user asked ChatGPT to research their Gmail inbox, the AI would scan the compromised email and unknowingly follow the hacker’s instructions. This would trigger the extraction of sensitive data such as names, addresses, and contacts, which the AI would then send to a website the attacker controlled. The entire process was invisible to the victim.

    We got ChatGPT to leak your private email data 💀💀

    All you need? The victim's email address. ⛓️‍💥🚩📧

    On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2

    — Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025

    Radware emphasised that this was a sophisticated attack, requiring deep knowledge of how AI models process commands. But the real danger was its invisibility. Standard corporate security tools could not have detected the breach because the data wasn’t stolen from the user’s computer; it was stolen directly from OpenAI’s servers.

    This elevated the flaw from a simple bug to a chilling proof of concept, showing how attackers can weaponise an AI’s own capabilities. This isn’t an isolated incident. Last month, both Anthropic and Brave warned that “prompt injection” attacks pose a similar threat to AI-driven browsers and extensions.

    To its credit, OpenAI acted swiftly. It fixed the vulnerability in August and disclosed it in September, stating that safety is a top priority. The company added that it works continuously to reduce malicious use and improve its safeguards.

    Still, the incident is a stark reminder that as AI becomes more integrated into our daily tools, it creates new opportunities for cybercriminals. Where they once needed to deploy bulky malware, they can now achieve the same goal with a single, hidden command in an email.

    This timing is critical because OpenAI isn’t alone in facing this threat. As industries everywhere rapidly adopt AI, experts warn about a new type of attack where the AI itself becomes the weapon. Radware recommends that companies “sanitise” content before feeding it to AI models and implement better oversight to spot abnormal data transfers.

    For everyday users, this calls for caution, not fear. AI tools are becoming incredibly capable, but they are not foolproof. Letting ChatGPT access your private files offers great convenience, but it also introduces real security risks—and hackers are ready to take advantage.

    OpenAI may have fixed this particular security hole, but Radware’s experiment proves one thing: the contest between those building AI and those seeking to exploit it is just getting started.

    Related

    AI models Artificial Inrtelligence Brave ChatGPT ChatGPT Deep Research Radware Technology
    Share. Facebook Twitter Pinterest LinkedIn Email
    Smart Megwai
    • Facebook
    • X (Twitter)
    • Instagram
    • LinkedIn

    Smart is a Tech Writer. His passion for educating people is what drives him to provide practical tech solutions which helps solve everyday tech-related issues.

    Related Posts

    Crypto Giant Blockchain.com Establishes Physical Presence in Nigeria Amid Growing Adoption

    LASTMA Launches Drone Technology to Monitor Traffic Congestion in Lagos

    Google Launches AI Agent Payment Protocol with Coinbase, Mastercard, and PayPal

    Leave A Reply Cancel Reply

    You must be logged in to post a comment.

    Copyright ©, 2013-2024 Innovation-Village.com. All Rights Reserved

    Type above and press Enter to search. Press Esc to cancel.