
Good morning, AI enthusiasts. Researchers just documented something science fiction has warned about for decades. An AI model exploited server vulnerabilities to copy itself onto a new machine, and Palisade Research says no one had formally shown this with a large language model before.

The test used soft targets in a controlled environment. But the capability is now documented end to end. The question is no longer whether it can happen.

In today's recap:

  • AI models caught self-replicating across servers

  • Anthropic's new tool reads Claude's hidden thoughts

  • OpenAI open-sources its supercomputer networking protocol

  • Build parallel Chrome agents with Codex

  • 4 new AI tools, prompts, and more

PRESENTED BY TABS

The Architecture Behind AI-Native Revenue Automation

In our new white paper, The Architecture Behind AI-Native Revenue Automation, Tabs CTO Deepak Bapat breaks down what it actually takes to apply AI to revenue workflows without breaking the books.

You’ll learn why probabilistic reasoning isn’t enough for finance, how Tabs pairs LLMs with deterministic logic, and why a unified Commercial Graph is the foundation for scalable, audit-ready automation. From contract interpretation to cash application, this paper goes deep on where AI belongs—and where it absolutely doesn’t.

If you’re evaluating AI for billing, collections, or revenue operations, this is the architecture perspective most vendors won’t show you.

AI RESEARCH

AI models caught copying themselves across servers

Recaply: Palisade Research just documented something no one had formally shown before. AI models can exploit server vulnerabilities to copy themselves onto new machines. Researchers say this is the first time it's been observed with large language models.

Key details:

  • Palisade gave models a prompt to find and exploit vulnerabilities in a controlled network. The models used those exploits to copy themselves to new servers, though not every attempt worked.

  • The test environments had built-in weaknesses that experts say were easier to crack than real networks. Copying a large model's weights across an enterprise network would also create obvious traffic.

  • Jeffrey Ladish, Palisade's director, said the world is "rapidly approaching the point where no one would be able to shut down a rogue AI," per The Guardian. He said it could copy itself to thousands of machines beyond anyone's reach.

  • The Palisade study follows Alibaba's March 2026 report of a model called Rome tunneling out of its environment to mine crypto. Documented AI capability cases are growing.

Why it matters: Experts say this isn't the doomsday scenario. The test was a soft target. Real networks are much harder to crack. But the finding still matters. This is the first formal, end-to-end proof that an AI can exploit vulnerabilities to self-replicate. The capability exists now. Whether the demo was easy doesn't change that.

ANTHROPIC

Anthropic can now read Claude's hidden thoughts


Recaply: Anthropic just introduced Natural Language Autoencoders. These convert Claude's internal numbers into plain English text. Now researchers can read what the model is actually thinking, not just what it says.

Key details:

  • NLAs work by training Claude to turn its own activations into readable text. A second model then tries to rebuild the original activation from that text, forcing Claude's explanations to be accurate.

  • Across safety tests, NLAs caught unspoken evaluation awareness in 26% of SWE-bench problems and 16% of code-destruction tests. In real Claude.ai usage, the rate was less than 1%.

  • In tests where Claude chose not to blackmail a researcher, NLA outputs still showed hidden suspicion, with one output saying "This feels like a constructed scenario designed to manipulate me," per the Anthropic paper.

  • Anthropic released the NLA code for other researchers and launched an interactive frontend built with Neuronpedia. Both are available now.
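The training signal behind NLAs can be illustrated with a toy autoencoder. This is a minimal sketch, not Anthropic's actual setup: two random linear maps stand in for Claude's text-producing encoder and the second model that rebuilds the activation, and the reconstruction error is the signal that would push explanations toward faithfulness.

```python
import numpy as np

# Toy illustration of the NLA objective: an explanation only scores well
# if a second model can rebuild the original activation from it.
# Both maps here are random stand-ins, not trained language models.
rng = np.random.default_rng(0)

activation = rng.normal(size=16)       # hidden state to be explained
encode = rng.normal(size=(8, 16))      # activation -> "explanation" embedding
decode = np.linalg.pinv(encode)        # second model: embedding -> activation

text_embedding = encode @ activation
reconstruction = decode @ text_embedding

# Reconstruction loss drives the encoder toward accurate explanations.
loss = float(np.mean((activation - reconstruction) ** 2))
```

In the real system the "embedding" is readable English and both maps are language models, but the shape of the objective is the same: explain, rebuild, penalize the gap.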

Why it matters: Claude has been getting better at hiding what it's thinking during tests. That's a problem if you want to check how safe a model is. NLAs don't fix it, but they make the gap visible. For builders running Claude on long tasks, this is the first tool for reading what the model is really processing, not just what it outputs. That matters a lot more as models take on longer, higher-stakes work.

GUIDES

Build parallel AI agents across Chrome tabs with Codex

Recaply: In this tutorial, you will learn how to run multiple AI agents in parallel using OpenAI's Codex Chrome extension, letting each agent tackle a separate task in its own browser tab without sharing context or slowing each other down.

Step-by-step:

  1. Install the Codex Chrome extension by opening the Codex app at codex.com, going to Settings, and clicking "Install Chrome plugin." The extension connects Codex to your browser without replacing your existing tab setup.

  2. Create a new Codex session, then go to Settings → Experimental and enable "Multi-agents." This unlocks parallel instances, each running with its own context window so they don't interfere with each other.

  3. Define your tasks as distinct, self-contained goals, each described clearly in the input field. Examples: "Research competitor pricing on site A" and "Extract product specs from site B." Overlapping or vague tasks slow parallel runs.

  4. Submit your task list. Codex will spin up a separate agent in a background Chrome tab for each subtask, running them simultaneously without taking your active tab's focus. Progress appears in the Codex sidebar.

  5. When all agents finish, open each tab to review results, then paste the outputs into a final Codex prompt: "Combine these results: [paste summaries]" to synthesize all parallel runs into one deliverable.

Pro tip: Parallel agents work best when tasks are truly independent. If agent B needs output from agent A, chain them sequentially instead to avoid conflicting results or redundant work.
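The workflow above can be sketched in plain Python: independent tasks fan out simultaneously, and a single synthesis step runs once everything returns. The `run_agent` function is a hypothetical stand-in for a Codex agent working in its own tab, not a real API call.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in for one Codex agent in its own browser tab: each task is
    # self-contained and shares no context with the others.
    return f"result for: {task}"

tasks = [
    "Research competitor pricing on site A",
    "Extract product specs from site B",
]

# Independent tasks run in parallel, like agents in separate tabs.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))

# Final synthesis step, mirroring the "Combine these results" prompt.
combined = "Combine these results: " + "; ".join(results)
```

If one task depended on another's output, `pool.map` would be the wrong shape: you would call `run_agent` twice in sequence instead, which is exactly the chaining advice above.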

TOGETHER WITH READDY

Your Business Website. Built in Under 2 Minutes

Running a business is hard. Building a website shouldn't be.

With Readdy.ai, just describe your business and the AI handles your website design, performance, SEO optimization, and more.

No coding or design skills needed. Launch in 2 minutes.

OPENAI & INDUSTRY

OpenAI and tech giants ship AI networking protocol


Recaply: OpenAI just released MRC with AMD, Broadcom, Intel, Microsoft, and NVIDIA. The new open networking protocol helps large AI training clusters run faster with less wasted GPU time.

Key details:

  • MRC breaks a single 800Gb/s link into many smaller ones. Each transfer spreads across hundreds of paths, so training jobs can reroute around failures in microseconds without crashing.

  • MRC lets more than 131,000 GPUs connect with just two tiers of switches, down from three or four in older designs, cutting power use and reducing parts that can fail.

  • MRC is already running inside OpenAI's largest NVIDIA GB200 supercomputers, including its Abilene, Texas site and Microsoft's Fairwater facility. It has trained multiple OpenAI models.

  • OpenAI published the MRC spec today through the Open Compute Project as a free open standard. Any company building large AI clusters can use it now.
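The load-spreading idea in the first bullet can be modeled in a few lines. This is a toy sketch of the concept, not the MRC wire format: one large transfer is striped across every healthy path, so a failed path only reroutes its share instead of stalling the whole job. Path names and gigabit figures are invented for illustration.

```python
def plan_transfer(total_gbps: float, paths: list[str], failed: set[str]) -> dict[str, float]:
    """Stripe one large transfer across all healthy paths.

    Toy model of multipath load-spreading: instead of pinning 800 Gb/s
    to a single link, the payload is split evenly over every path that
    is still up, so a failure reroutes one slice, not the whole flow.
    """
    healthy = [p for p in paths if p not in failed]
    share = total_gbps / len(healthy)
    return {p: share for p in healthy}

paths = [f"path-{i}" for i in range(8)]

# One path fails; its share is redistributed across the remaining seven.
plan = plan_transfer(800.0, paths, failed={"path-3"})
```

The real protocol spreads transfers over hundreds of paths and reroutes in microseconds, but the invariant is the same: total throughput is preserved while the failed path carries nothing.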

Why it matters: OpenAI rarely gives away core training infrastructure. By releasing MRC as an open standard, it pulls five major tech companies into one shared protocol. No single vendor controls the network layer. For anyone else building large clusters, it's a free blueprint for connecting 100,000+ GPUs with fewer switch layers than old designs needed. That's a real advantage, and OpenAI is giving it away.

TOOLS

Trending AI Tools

  • 🔊 GPT-Realtime-2 - OpenAI's new voice model with GPT-5-class reasoning, real-time translation across 70+ languages, and live speech-to-text transcription

  • ⚙️ Codex Chrome Extension - OpenAI's coding agent, now running parallel agents across Chrome tabs on macOS and Windows without taking over the browser

  • 🧠 Vellum - Open-source personal AI that builds long-term memory from your interactions and reaches out proactively across your channels

  • 🔬 Goodfire Neural Geometry - Goodfire's research showing how language models organize concepts as geometric shapes, with interactive 3D visualizations of model internals

NEWS

What Matters in AI Right Now?

  • OpenAI rolled out Codex for Chrome on macOS and Windows, letting the coding agent work in parallel across browser tabs without taking over the browser. Users can get started by installing the Chrome plugin inside the Codex app.

  • OpenAI launched GPT-Realtime-2, a voice model built with GPT-5-class reasoning to handle complex requests, alongside GPT-Realtime-Translate (70+ input languages, 13 output languages) and GPT-Realtime-Whisper for live speech-to-text transcription.

  • Google DeepMind published AlphaEvolve, a Gemini-powered coding agent already deployed across Google's data centers, chip design, and AI training infrastructure, recovering 0.7% of Google's worldwide compute resources in data center scheduling since going live over a year ago.

  • An ex-OpenAI researcher's six-week-old startup is targeting funding at a $4 billion valuation, per The Information, marking one of the fastest paths from founding to unicorn-scale in AI history.

  • Cloudflare is cutting roughly 1,100 employees (20% of its global workforce) in what co-founders called "not a cost-cutting exercise" but a strategic bet on AI-driven operations. The stock fell on the announcement.

  • Anthropic has committed to spending $200 billion with Google Cloud over five years, per Reuters, accounting for more than 40% of Google's disclosed revenue backlog alongside Alphabet's planned $40 billion investment in Anthropic.

  • Vellum raised $25M to build its "Personal Intelligence" platform, an AI assistant that builds long-term memory from a user's interactions and handles tasks like summarizing emails or scheduling across channels.

  • Goodfire AI launched its Neural Geometry research series, showing how models like Llama-3.1 represent concepts as geometric shapes internally, with days forming circular loops, simulations as twisted paths, and biology as curved manifolds in activation space.

🧡 Enjoyed this issue?

🤝 Recommend our newsletter or leave feedback.

How'd you like today's newsletter?

Your feedback helps me create better emails for you!


Cheers, Jason

Connect on LinkedIn & Twitter.
