    newshunt

    OpenAI Says Its Next AI Models Could Create ‘High’ Cyber Threats

    2 months ago

    OpenAI has warned that its next AI models could reach ‘High’ levels of cybersecurity capability, which could become dangerous if misused. The company said these future models might even help attackers develop zero-day exploits against well-defended systems or break into large enterprise networks, causing real-world damage. OpenAI shared the update in a blog post on December 10.

    As reported by Reuters, the company also said it is working on improving the defensive side by helping cybersecurity teams detect issues, fix code, and patch vulnerabilities faster.

    OpenAI Cybersecurity Risks Explained

    As per the report, OpenAI said that AI is advancing very fast, especially on cybersecurity tasks, and shared figures showing how capable the new models have become.

    For example, GPT-5.1-Codex-Max scored 76% on capture-the-flag (CTF) challenges last month, a huge jump from the 27% GPT-5 scored in August this year.

    As these abilities can be misused, OpenAI is focusing on safety. The company is using a layered safety stack, which includes access controls, infrastructure hardening, egress controls, and monitoring.

    OpenAI also said it is training AI models to refuse harmful requests while still being useful for learning or defensive work. The company is improving monitoring across all its products to identify any suspicious cyber activity. 

    OpenAI is also partnering with expert red-team organisations to test and improve safety features.

    AI Cyber Threats & OpenAI’s Safety Steps

    OpenAI is not alone in this effort. Google recently upgraded Chrome’s security to protect against indirect prompt injection attacks before adding Gemini agent features.

    Anthropic also revealed in November 2025 that a Chinese state-sponsored group had used its Claude Code tool for a major AI-led spying operation, which was later stopped.

    OpenAI said its own AI agent, called Aardvark, is in private beta. Aardvark can scan codebases for weaknesses and suggest patches. It will be free for selected non-commercial open-source projects.

    OpenAI also plans to set up a Frontier Risk Council with external cybersecurity experts, along with a trusted access program for users and developers.

