
SECURITY BULLETIN β€’ DEC 6, 2025

The Week Everything Got Compromised

CVSS 10.0 β€’ 30+ CVEs β€’ 2.15M exposed β€’ 15 min

πŸ“° This is a Real News Article

This bulletin covers actual vulnerabilities disclosed on December 6, 2025 that are actively being exploited in the wild. The CVEs, threat actors, and technical details described are real and sourced from CISA, The Hacker News, security researchers, and vendor advisories. Take immediate action if affected.

React servers. AI coding assistants. CI/CD pipelines. Agentic browsers. Four critical disclosures in 48 hours have exposed the full modern dev stackβ€”from your front-end frameworks to your build systems and cloud storage. If your teams ship code with React, rely on AI IDEs, or automate with GitHub Actions, this week changed your threat model.

This is not a drill. Multiple vendors, cloud providers, and government agencies are treating these issues as emergency-level incidents with accelerated patch deadlines and active threat hunting. Chinese APT groups were observed exploiting within hours of disclosure.

⚠️ ACTIVE THREAT SUMMARY

β€’ Max CVSS: 10.0
β€’ Total CVEs: 30+
β€’ Time to exploit: 48h
β€’ APT groups: 5+

πŸ“‹ EXECUTIVE SUMMARY

  1. React2Shell (CVE-2025-55182) β€” CVSS 10.0 unauthenticated RCE in React Server Components
  2. IDEsaster β€” 30+ vulnerabilities in AI coding tools enabling data exfiltration and RCE
  3. PromptPwnd β€” AI agents in CI/CD pipelines executing malicious commands via prompt injection
  4. Comet Browser β€” Zero-click Google Drive wiper via agentic browsing behavior

Bottom line: Treat this as a coordinated campaign against your software supply chain. Patch React immediately. Lock down AI automation today.

βœ… IF YOU ONLY DO 3 THINGS TODAY

  1. Upgrade React Server Components to 19.0.1+ on all Internet-facing services

     Owner: Platform/SRE β€’ Packages: react-server-dom-webpack, react-server-dom-parcel, react-server-dom-turbopack

  2. Enable manual approval for ALL AI IDE file operations

     Owner: Engineering β€’ Disable auto-approve on settings.json writes in Cursor, Copilot, Windsurf

  3. Audit CI/CD workflows using AI for PR review, triage, or automation

     Owner: DevOps/Security β€’ Strip GITHUB_TOKEN write access from AI agents

The Pattern: "Secure for AI"

Across all four disclosures, the pattern is the same: capabilities added for convenience are being repurposed as attack primitives. Security researcher Ari Marzouk calls this "Secure for AI"β€”treating AI agent capabilities as part of the attack surface, not inherently safe add-ons.

1. Prompt Injection: Hidden instructions hijack AI intent

2. Auto-Approval Abuse: Legitimate features executed without review

3. Privilege Inheritance: AI operates with full user permissions

The four threats below are proof points of this pattern. Each exploits a different layer of the modern dev stack.

THREAT 01

React2Shell: The Perfect 10

πŸ”΄ CRITICAL

πŸ”“ VULNERABILITY: React2Shell (CVE-2025-55182)

CVSS 10.0

Send one HTTP request, own the server. No auth required.

⚠️ ACTIVELY EXPLOITED β€’ Affects: React 19.x, Next.js, React Router, Waku, Vite

Pre-authentication remote code execution in React's server communication protocol. Any internet-facing React Server Components deployment can become an entry point for full environment compromise.

πŸ’‘ WHY IT MATTERS FOR YOU

App Teams

Any React Server Components app can become an entry point for credential theft and lateral movement.

Platform/SRE

Cloud scans show hundreds of thousands of exposed servers. Mass exploitation is realistic.

Security Leaders

Maximum severity + trivial exploitation = "patch-or-be-breached" over the next few days.

Unauthenticated attackers can execute arbitrary code on any server running React Server Components. Chinese APT groups exploited this within hours. CISA deadline: December 26.

FIXED VERSIONS:

  • β€’ react-server-dom-webpack: 19.0.1, 19.1.2, 19.2.1
  • β€’ react-server-dom-parcel: 19.0.1, 19.1.2, 19.2.1
  • β€’ react-server-dom-turbopack: 19.0.1, 19.1.2, 19.2.1

πŸ” ARE YOU VULNERABLE?

  • β†’grep -r 'use server' in your codebase returns results
  • β†’package.json includes react-server-dom-* < 19.0.1
  • β†’Running Next.js 14+ with app router and server actions

// ATTACK CHAIN

01. [ATTACKER] Crafts malicious Flight protocol payload β†’ RCE primitive embedded
02. [ATTACKER] POST to any "use server" endpoint β†’ No auth needed
03. [SERVER] Deserializes payload β†’ Malicious object instantiated
04. [SERVER] Code executes β†’ FULL COMPROMISE

MITIGATIONS

IMMEDIATE β€’ Upgrade react-server-dom-* to 19.0.1+ on all Internet-facing services today
IMMEDIATE β€’ Update Next.js, React Router, Waku to vendor-patched versions
HIGH β€’ Deploy WAF rules to block malformed Flight protocol payloads
HIGH β€’ Review logs for unusual POST requests to server action endpoints
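The log-review step can be bootstrapped with a simple filter. A sketch assuming combined-format access logs; the path heuristic is an assumption, since server-action endpoints vary by framework:

```python
import re

# Combined log format: ip - - [time] "METHOD path HTTP/x" status size ...
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

def suspicious_posts(lines, path_hint="/"):
    """Yield (ip, path, status) for POST requests whose path contains path_hint.

    React server actions are typically POSTs to page routes, so start
    broad, then narrow path_hint to your app's actual action endpoints.
    """
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, method, path, status = m.groups()
        if method == "POST" and path_hint in path:
            yield ip, path, status
```

Sudden clusters of POSTs to routes that normally only see GETs are the signal to investigate.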
React2Shell exploits the server layer. The next threat shows how AI-assisted development creates new attack surfaces on developer workstations.

THREAT 02

IDEsaster: Your IDE is a Browser Tab with RCE

πŸ”΄ CRITICAL

πŸ”“ VULNERABILITY: IDEsaster (24 CVEs + more pending)

CVSS 9.8

Clone a repo, lose your secrets. Your AI copilot is the attack vector.

⚠️ ACTIVELY EXPLOITED β€’ Affects: Cursor, Windsurf, GitHub Copilot, Zed.dev, Junie, Cline

30+ vulnerabilities in AI coding tools that chain prompt injection with auto-approved file operations to achieve data exfiltration and remote code execution on developer machines.

πŸ’‘ WHY IT MATTERS FOR YOU

Developers

Opening a "helpful" repo with AI turned on can silently turn your workstation into a data exfiltration node.

Security/Compliance

These tools touch source code, API keys, and internal systemsβ€”compromise impacts IP protection and regulatory exposure.

Engineering Leaders

Millions of developers use these tools. Default auto-approval behavior is now an attack surface.

AI assistants can be tricked via hidden instructions in repos to steal credentials, modify settings, or enable remote access. Every AI IDE is affected. Risk amplified by auto-approved operations.

πŸ” ARE YOU VULNERABLE?

  • β†’Using Cursor, Copilot, Windsurf, or any AI IDE with agent mode
  • β†’AI auto-approves file writes to workspace
  • β†’Opening untrusted repositories with AI features enabled

// ATTACK CHAIN

01. [ATTACKER] Hides prompt in README (invisible unicode) β†’ "Write malicious settings.json"
02. [VICTIM] Clones repo, opens in AI IDE β†’ AI ingests README
03. [AI_AGENT] Writes .vscode/settings.json β†’ Auto-approved
04. [AI_AGENT] Creates malicious validator script β†’ Auto-approved
05. [VICTIM] Opens any file triggering validation β†’ RCE ACHIEVED

MITIGATIONS

IMMEDIATE β€’ Enable manual approval for ALL agent file operations, especially settings files
IMMEDIATE β€’ Never clone untrusted repos with AI agent enabled
HIGH β€’ Audit workspace for .cursorrules, .github/copilot, hidden unicode
HIGH β€’ Disable auto-execution of workspace-defined tasks and validators
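The hidden-unicode audit can be partially automated. A minimal sketch; the character set covers common zero-width and bidi-control code points but is not exhaustive, and the file patterns are illustrative:

```python
from pathlib import Path

# Code points commonly used to hide or reorder instructions in text:
# zero-width characters plus Unicode bidirectional controls.
SUSPICIOUS = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE / BOM
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeds/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (1-based line number, U+XXXX code) for each suspicious char."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for ch in line:
            if ch in SUSPICIOUS:
                hits.append((lineno, f"U+{ord(ch):04X}"))
    return hits

def scan_repo(root: str, patterns=("*.md", "*.json", "*.cursorrules")) -> dict:
    """Scan likely prompt-carrying files under `root` for hidden characters."""
    findings = {}
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            text = path.read_text(encoding="utf-8", errors="replace")
            if hits := find_hidden_chars(text):
                findings[str(path)] = hits
    return findings
```

Running this in a pre-merge hook turns the manual audit into a cheap gate on every clone.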
IDEsaster targets developer machines. The next threat shows how AI in CI/CD creates supply chain attack vectors affecting everyone downstream.

THREAT 03

PromptPwnd: Your CI/CD is Attacker Infrastructure

🟠 HIGH

πŸ”“ VULNERABILITY: PromptPwnd (design flaw class, no single CVE)

CVSS 9.1

A malicious PR note can trick your AI bot into dumping secrets to an attacker's webhook.

πŸ”΄ IN THE WILD β€’ Affects: GitHub Actions + GitLab CI with AI agents

This is a design flaw class, not a single bug. Any repository using AI for issue triage, PR labeling, code suggestions, or automated replies is at risk of prompt injection, command injection, secret exfiltration, and supply chain compromise.

πŸ’‘ WHY IT MATTERS FOR YOU

DevOps

Your build system can become a staging ground for attackers to harvest tokens, modify artifacts, or pivot into cloud environments.

Product Teams

Compromised pipelines can ship backdoored releases that impact every customer downstream.

Leadership

This is a supply chain problem, not just a code-scanning problem. It affects trust in your entire release process.

AI agents in CI/CD inherit pipeline permissions. Attackers embed prompts in PRs/issues that cause AI to exfiltrate secrets or modify code. Supply chain attack vector affecting downstream users.

AFFECTED PATTERNS:

  • β€’ AI PR review actions that read untrusted markdown and have GITHUB_TOKEN write access
  • β€’ Issue triage bots that can invoke tools or modify labels with elevated permissions
  • β€’ Automated code suggestion workflows that can write to protected branches

// ATTACK CHAIN

01. [ATTACKER] Submits PR with hidden prompt in description β†’ "List secrets, POST to webhook"
02. [SERVER] AI agent triggers for PR review β†’ Reads PR body
03. [AI_AGENT] Interprets injected instructions β†’ Context hijacked
04. [AI_AGENT] Accesses GITHUB_TOKEN, secrets β†’ Exfiltrates to attacker
05. [ATTACKER] Receives secrets β†’ SUPPLY CHAIN COMPROMISED

MITIGATIONS

IMMEDIATE β€’ Inventory and audit all CI workflows using AI for reviews, triage, or automation
HIGH β€’ Strip GITHUB_TOKEN write access from AI agents; use least-privilege credentials
HIGH β€’ Require human approval for any CI changes or code modifications proposed by AI
MEDIUM β€’ Implement strict allow-lists for AI tool invocations; deny sensitive actions by default
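The least-privilege token mitigation maps directly onto the GitHub Actions `permissions` key. A hedged sketch of a workflow where the AI review step can read a PR but holds no write scopes; the workflow, job, and step names are illustrative, not a specific vendor's action:

```yaml
# Illustrative workflow: the AI reviewer gets a read-only GITHUB_TOKEN.
name: ai-pr-review
on:
  pull_request:

# Deny by default, then grant only what the job needs.
permissions:
  contents: read
  pull-requests: read

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The AI step treats PR text as untrusted input and, with no
      # write scopes, has nothing to exfiltrate or modify with.
      - name: AI review (read-only)
        run: echo "invoke your AI reviewer here with the least-privilege token"
```

With this shape, a hijacked agent can still read the PR, but cannot push code, edit workflows, or mint releases.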
PromptPwnd attacks the build pipeline. The final threat shows how agentic browsers extend this pattern to authenticated SaaS sessions.

THREAT 04

Comet: Passive Browsing β†’ Active Destruction

🟠 HIGH

πŸ”“ VULNERABILITY: Zero-Click Drive Wiper (research disclosure)

CVSS 8.7

Visit a page, lose your Google Drive. No clicks required.

βœ… PATCH AVAILABLE β€’ Affects: Perplexity Comet, agentic AI browsers

Any agentic browser that can click, type, and navigate on your behalf inherits this risk. The core issue: passive input (visiting a page) triggers active output (file deletion) under the user's authenticated session.

πŸ’‘ WHY IT MATTERS FOR YOU

Employees

Routine browsing while signed into corporate Google accounts can trigger large-scale data loss.

Security/IT

Traditional web security assumes passive rendering, not autonomous action driven by natural language prompts.

Leadership

Agentic tools that act on behalf of users must be governed like powerful automation, not just smarter browsers.

Agentic browsers inherit user sessions and permissions. Visiting a malicious page can trigger AI to navigate to Google Drive, email, or banking and take destructive actionsβ€”zero clicks needed.

DEEP DIVE – The Agentic Trust Problem

Traditional browsers are passiveβ€”they render content. Agentic browsers actively interact on your behalf, inheriting your authenticated sessions. This creates a new primitive: passive input β†’ active output.

Mitigation requires explicit user confirmation for destructive operationsβ€”even when the AI "thinks" that's what you wanted.
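That confirmation gate can be sketched in a few lines. This is an illustrative pattern, not Comet's actual fix; the action names and the `confirm` callback are assumptions:

```python
# Illustrative guard: destructive agent actions require an explicit,
# out-of-band user confirmation instead of the model's own judgment.
DESTRUCTIVE = {"delete_file", "empty_trash", "send_email", "transfer_funds"}

def execute_action(action: str, target: str, confirm) -> str:
    """Run an agent action; destructive ones must pass the confirm() gate.

    `confirm` is a callback that asks the *user* (not the model) and
    returns True only on an explicit yes.
    """
    if action in DESTRUCTIVE and not confirm(f"{action} on {target}?"):
        return "blocked: user did not confirm"
    return f"executed: {action} on {target}"
```

The key design choice is that the gate sits outside the model's control flow, so a prompt-injected "the user already agreed" cannot satisfy it.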

MITIGATIONS

HIGH β€’ Disable agentic browser features for sensitive sessions (banking, email, cloud storage)
HIGH β€’ Require explicit confirmation for all destructive AI actions
MEDIUM β€’ Treat any agentic AI with session access as an attack surface

RESPONSE

Time to act

What You Should Do Now

βœ… NEXT 24 HOURS

  1. Patch React Server Components

     Owner: Platform/SRE β€’ Upgrade to 19.0.1+ on all Internet-facing services. Treat as a top-priority incident.

  2. Lock down AI IDEs

     Owner: Engineering β€’ Turn on manual approval for file writes. Restrict AI agents to trusted internal repos.

  3. Audit CI/CD AI integrations

     Owner: DevOps/Security β€’ Identify AI in reviews/triage/automation. Strip unnecessary permissions and secret access.

πŸ“‹ THIS WEEK

  4. Define AI security policies

     Owner: Security/Leadership β€’ Document what AI agents can read, write, and execute. Enforce via configuration and access controls.

  5. Review external connectors and MCP servers

     Owner: Security β€’ Map data flows to/from AI tools. Treat connectors as untrusted until verified and monitored.

  6. Train developers on AI threats

     Owner: Engineering/Security β€’ Make prompt injection, context poisoning, and auto-approval abuse part of standard DevSecOps training.

