Close Menu
Innovation Village | Technology, Product Reviews, Business
    Facebook X (Twitter) Instagram
    Tuesday, December 9
    • About us
      • Authors
    • Contact us
    • Privacy policy
    • Terms of use
    • Advertise
    • Newsletter
    • Post a Job
    • Partners
    Facebook X (Twitter) LinkedIn YouTube WhatsApp
    Innovation Village | Technology, Product Reviews, Business
    • Home
    • Innovation
      • Products
      • Technology
      • Internet of Things
    • Business
      • Agritech
      • Fintech
      • Healthtech
      • Investments
        • Cryptocurrency
      • People
      • Startups
      • Women In Tech
    • Media
      • Entertainment
      • Gaming
    • Reviews
      • Gadgets
      • Apps
      • How To
    • Giveaways
    • Jobs
    Innovation Village | Technology, Product Reviews, Business
    You are at:Home»Artificial Intelligence»NITDA Issues Urgent Alert: Nigerians Warned of Critical ChatGPT Vulnerabilities Exposing Users to Data Leakage
    ChatGPT

    NITDA Issues Urgent Alert: Nigerians Warned of Critical ChatGPT Vulnerabilities Exposing Users to Data Leakage

    0
    By Toluwanimi Adejumo on December 8, 2025 Artificial Intelligence

    The National Information Technology Development Agency (NITDA) has issued a critical advisory warning Nigerian individuals and businesses against potential security risks associated with the popular AI platform, ChatGPT. The agency raised alarms over newly discovered vulnerabilities that could allow malicious actors to hijack user conversations, steal sensitive data, and manipulate chat histories without the user’s knowledge.

    In a press statement released on Sunday in Abuja, the agency highlighted that cybercriminals are increasingly exploiting gaps in Large Language Models (LLMs) to conduct “data-leakage attacks.” These vulnerabilities, according to NITDA, jeopardize not just personal privacy but also the intellectual property of Nigerian startups and corporate entities that rely on these tools for daily operations.

    The “HackedGPT” Threat

    The advisory appears to reference a series of security flaws recently identified by global cybersecurity researchers, often categorized under the moniker “HackedGPT.” These vulnerabilities effectively turn the AI chatbot into a “confused deputy,” forcing it to perform actions against the user’s will.

    NITDA’s Computer Emergency Readiness and Response Team (NITDA-CERRT) outlined two primary attack vectors:

    Indirect Prompt Injection: Attackers can embed invisible, malicious commands into websites or documents that a user asks ChatGPT to summarize or analyze. Once the AI processes this tainted content, it can be tricked into secretly sending the user’s private chat history to a third-party server controlled by the attacker.

    Persistent Memory Injection: This more insidious flaw exploits ChatGPT’s “memory” feature. A malicious prompt can plant false instructions in the AI’s long-term memory. For example, an attacker could trick the AI into appending a phishing link to every future response it generates for the user, creating a persistent security hazard that lasts across multiple sessions.

    “The convenience of AI should not come at the cost of data sovereignty,” the statement read. “We have observed a trend where malicious instructions hidden in seemingly harmless web pages can trick AI models into exfiltrating sensitive user data. Nigerians must understand that an AI interface is not a private vault.”

    Risks to Nigeria’s Digital Economy

    The warning comes at a pivotal time as Nigeria cements its status as Africa’s digital hub. With the proliferation of tech startups and the integration of AI into sectors like banking and fintech, the attack surface for such vulnerabilities has widened.

    Director General of NITDA, Kashifu Inuwa Abdullahi, has frequently emphasized the “shared responsibility” of cybersecurity. While today’s specific warning focuses on ChatGPT, it reflects a broader concern regarding the rapid adoption of AI tools without adequate security “guardrails.”

    “Many employees act under the assumption that their conversations with AI are ephemeral and private,” said a cybersecurity analyst at the Lagos-based firm, CyberSafe Foundation. “NITDA’s warning is timely because it challenges that assumption. If you paste proprietary code or customer data into a chatbot that has been compromised by a memory injection attack, that data is effectively in the wild.”

    NITDA’s Recommendations

    To mitigate these risks, NITDA has advised Nigerians to adopt the following precautionary measures immediately:

    Sanitize Inputs: Avoid entering Personally Identifiable Information (PII), passwords, financial details, or trade secrets into ChatGPT or any public generative AI tool.

    Verify Sources: Be cautious when asking AI tools to summarize content from untrusted websites, as these sites may contain hidden “prompt injection” commands.

    Disable “Memory” Features: For users handling sensitive tasks, NITDA recommends disabling the “Memory” or “Chat History” features in settings where possible, or regularly clearing stored memories to flush out potential malicious instructions.

    Corporate Governance: Businesses are urged to implement strict “Acceptable Use Policies” (AUP) regarding AI, explicitly defining what data is classified and forbidden from being processed by public AI models.

    Global Context

    NITDA’s alert mirrors similar warnings issued by cybersecurity agencies in Europe and North America following the disclosure of the “HackedGPT” flaws by Tenable Research in late 2025. These findings demonstrated how attackers could use the AI’s own voice functionality to speak out stolen data or manipulate users into clicking fraudulent links.

    The agency concluded its statement by reassuring the public that it is working with international partners and AI vendors to ensure these vulnerabilities are patched. However, until comprehensive fixes are deployed globally, “hyper-vigilance” remains the best defense.

    “Innovation must be matched with security,” the statement concluded. “We urge all Nigerians to ‘Think Before You Prompt.'”

    Related

    artificial intelligence ChatGPT NITDA
    Share. Facebook Twitter Pinterest LinkedIn Email
    Toluwanimi Adejumo

    Toluwanimi Adejumo Holds a BSc in Mass Communication and Certification in Content writing and Digital marketing. He is a Content Writer and Social Media manager, He loves writing on information and Communication Technology Sector, Cryptocurrency, Remote work, Health Technology and Sports.

    Related Posts

    Why Moniepoint and Sun King Are the New Model for 2026: Focus on Resilience, Not Just Growth

    Meta Acquires Limitless to Power Next-Gen AI Wearables

    Galatasaray Fans Use AI to Create Viral Anthem “King of Aslan Osimhen” Celebrating African Superstar

    Leave A Reply Cancel Reply

    You must be logged in to post a comment.

    Copyright ©, 2013-2024 Innovation-Village.com. All Rights Reserved

    Type above and press Enter to search. Press Esc to cancel.