
Good morning, AI enthusiasts. SpaceX just handed Anthropic's Claude 220,000 NVIDIA GPUs, which is notable when you remember SpaceX belongs to Elon Musk, Anthropic's most prominent ideological rival in AI.

Paired with Amazon, Google, and Microsoft deals already in place, Anthropic is quietly assembling one of the largest compute stacks outside the hyperscalers. Has the infrastructure race already produced a winner no one's crowned yet?

In today's recap:

  • SpaceX's Colossus GPUs now power Anthropic's Claude

  • DeepMind trains AI agents inside EVE Online

  • Build smarter agents with Claude's Dreaming feature

  • OpenAI's Murati calls Altman out in court

  • 4 new AI tools, prompts, and more

ANTHROPIC & SPACEX

SpaceX gives Anthropic access to 220,000 GPUs


Recaply: Anthropic just secured access to SpaceX's Colossus 1 data center, gaining 220,000 NVIDIA GPUs and 300 megawatts of compute. At the same time, it doubled Claude Code's five-hour rate limits for paid users.

Key details:

  • Colossus 1 has H100, H200, and GB200 chips. Anthropic can use them now to add more room for Claude Pro and Max users who've been hitting rate limits.

  • The deal adds 300+ megawatts of new capacity. It joins other deals with Amazon (5GW), Google (5GW), and Microsoft ($30B in Azure credits) in Anthropic's compute buildout.

  • Anthropic also said it wants to build orbital AI compute with SpaceX, working toward multiple gigawatts in space, according to the official announcement.

  • Three things changed today. Claude Code's five-hour limits doubled for Pro, Max, Team, and Enterprise plans. Peak-hour limit cuts are gone. Opus API rate limits went up.

Why it matters: SpaceX belongs to Elon Musk. So it's odd to see it rent GPUs to one of his AI rivals. But both sides gain here. What matters most is scale. Anthropic now has compute deals with SpaceX, Amazon, Google, and Microsoft. The effect hits users today. Claude Code limits doubled. Peak-hour blocks are gone. Opus API limits are up. If you've been hitting Claude's rate walls, this is the update you've waited for.

PRESENTED BY ECHELONN

Turn Google Ads into a real customer acquisition channel

If you're running Google Ads for your ecommerce brand, you know potential and performance are two different things.

Echelonn, one of the leading Google Ads agencies in the space, sees the same issues in most accounts it audits. Campaigns that should be paused are still running. Revenue is being misattributed. And there's wasted spend hiding in places no one's checking.

The fix isn't more budget. It's better structure, better feeds, and better strategy.

Echelonn works exclusively with ecommerce brands on Google and YouTube Ads. Over 300 brands. More than $15M in monthly spend managed. Nothing else.

If you're already spending on Google and not confident your account is where it should be, book a free audit with Echelonn. No pitch. Just clarity on what to fix first.

GOOGLE DEEPMIND

DeepMind enters EVE Online to train AI


Recaply: Google DeepMind just announced a research deal with Fenris Creations, the studio behind EVE Online, to train AI agents inside the game's 23-year-old player-driven economy.

Key details:

  • Research starts in offline EVE servers, not connected to live players. This gives DeepMind a safe space to test AI in a real, working economic model.

  • EVE Online has run since 2003. Millions of players have shaped its economy for 23 years, making it one of the longest-running player-built worlds in gaming.

  • DeepMind's Alexandre Moufarek said EVE is "a one-of-a-kind simulation for testing general-purpose AI in a safe sandbox." Fenris CEO Hilmar has pushed for this kind of work for years.

  • Fenris Creations also spun out of Pearl Abyss today, rebranding from CCP Games. DeepMind's Adrian Bolton will speak at Fanfest 2026 next week with more details.

Why it matters: AlphaGo and AlphaStar proved games make great AI test beds. EVE Online goes further. It's not a game with fixed rules to learn. It's a real economy built by real people over 23 years. It has wars, trade, scams, and politics. No one planned any of it. Testing AI in that kind of world is harder and more useful than any lab test. DeepMind's bet here looks like real work, not a press move.

GUIDES

Build self-improving AI agents using Claude's new Dreaming feature

Recaply: In this tutorial, you will learn how to set up Claude Managed Agents with the Dreaming feature, enabling your agents to review past sessions, extract patterns, and improve their memory stores automatically between runs.

Step-by-step:

  1. Go to claude.ai/agents and request access to the Dreaming research preview by clicking "Request access" under Managed Agents features, then enable it in your agent's Settings under "Session Intelligence."

  2. In your agent's YAML configuration, add a memory_store block pointing to your agent's persistent memory file, then set dreaming.mode to "auto" (automatic updates after each session) or "review" (human approval required before memory updates land).

  3. Run your agent on a category of tasks it'll handle repeatedly, such as document review, customer support, or code analysis. Dreaming works best when agents accumulate multiple sessions in the same domain.

  4. After 3 or more sessions, open the Memory Inspector in Claude Console to review what Dreaming extracted. You'll see structured updates: recurring errors corrected, workflow patterns converged on, and user preferences inferred from session transcripts.

  5. Pair Dreaming with Outcomes for full self-correction. Write a rubric describing what success looks like for your task, set outcomes.rubric in your config, and a separate Claude instance grades each agent output and prompts another pass if it doesn't clear the bar.

Pro tip: Enable dreaming.multiagent_sharing if you're running multiple specialist subagents in the same domain. Dreaming can aggregate patterns from every parallel agent (up to 20) and publish shared insights to a team-wide memory store, compounding improvements across your entire agent fleet.
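Putting steps 2 and 5 together, your agent's configuration might look something like the sketch below. Only the field names called out in the steps (memory_store, dreaming.mode, dreaming.multiagent_sharing, outcomes.rubric) come from the tutorial; the surrounding structure, paths, and values are assumptions, since the research preview's exact schema isn't documented here.

```yaml
# Hypothetical config sketch -- field names from the steps above,
# everything else (structure, paths, values) is assumed.
agent:
  name: doc-review-agent

  # Step 2: point Dreaming at a persistent memory file
  memory_store:
    path: ./memory/doc-review.json

  dreaming:
    mode: review              # "auto" applies updates after each session;
                              # "review" holds them for human approval
    multiagent_sharing: false # pro tip: enable for specialist subagent fleets

  # Step 5: pair Dreaming with Outcomes for self-correction
  outcomes:
    rubric: |
      A successful review flags every factual error,
      cites the source line for each flag, and ends
      with a one-paragraph summary.
```

Starting with mode: review is the safer default while you're evaluating the feature, since you can inspect each proposed memory update in the Memory Inspector before it lands.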

TOGETHER WITH WISPR FLOW

Say user_id. Get user_id.

Wispr Flow recognizes variable names, file references, and framework syntax mid-dictation. Speak your prompt, get developer-ready text for GitHub, Jira, or your editor. No mangled syntax. Ever.

OPENAI

Mira Murati says Altman lied about AI safety


Recaply: OpenAI's former CTO just testified under oath in the Musk v. Altman trial that Sam Altman gave her false information about a new AI model's safety review, with Murati confirming the discrepancy directly with OpenAI's general counsel.

Key details:

  • In a video deposition, Murati said Altman falsely told her OpenAI's legal department determined a new model didn't need to go through the company's deployment safety board, a claim she then verified was untrue.

  • Murati served as OpenAI's CTO for years before departing in October 2024, making her one of the most credible senior insiders to testify in the ongoing trial.

  • She confirmed the discrepancy by checking with Jason Kwon, OpenAI's current chief strategy officer, who told her Altman's account didn't match what the legal team actually said.

  • The deposition was shown during the Musk v. Altman trial on Wednesday, May 7. Murati also described Altman making her job "incredibly hard" through a lack of management clarity.

Why it matters: This case has had plenty of headlines. But Murati's words add something new. She was OpenAI's CTO. She testified under oath. And she said the CEO gave her false safety information. That's not just legal drama. It's a direct question about whether Altman can be trusted on safety decisions. How Altman explains the gap between his account and Kwon's will likely be a key moment in this trial.

TOOLS

Trending AI Tools

  • 🚀 GPT-5.5 Instant - OpenAI's fast-inference model rolling out in ChatGPT for real-time, low-latency applications

  • 🤖 Claude Managed Agents - Anthropic's agentic layer, now with scheduled self-improvement between sessions in research preview

  • 🎨 Pomelli - Google Labs' tool for generating on-brand content for small businesses

  • 💡 UNI-1.1 - Luma Labs' reasoning model with less than half the price and latency of comparable models

NEWS

What Matters in AI Right Now?

  • Anthropic just rolled out "dreaming" for Claude Managed Agents, a background process that reviews past agent sessions, identifies patterns, and updates memory stores for self-improvement between runs. It's available as a research preview alongside Outcomes and multi-agent orchestration, which are now in public beta.

  • OpenAI just launched ChatGPT Futures, recognizing 26 young innovators from the first graduating class to have had ChatGPT for their entire university experience. Each selected individual or team receives a $10,000 grant and access to cutting-edge technologies to advance their projects.

  • Zyphra just released ZAYA1-8B, an 8B MoE model with 760M active parameters that matches DeepSeek-R1 on math benchmarks, trained entirely on AMD hardware. Available on Zyphra Cloud, it also outperforms models many times its parameter count on coding and reasoning evals.

  • Prime Intellect just launched Lab, a full-stack platform that unifies RL environments, hosted training, and hosted evaluations for researchers and companies to train their own agentic models. More than 3,000 RL runs were completed in private beta before today's public opening.

  • Ethos just raised $22.75M to expand its platform matching AI-verified domain experts with paid freelance gigs, targeting professionals who want to monetize niche knowledge in an AI-disrupted job market.

  • Genesis AI just unveiled GENE-26.5, the first robotic foundation model claiming human-level physical manipulation capabilities, alongside a proprietary dexterous robotic hand for direct skill transfer from humans to robots. The system demonstrated cooking a 20-step meal, solving a Rubik's Cube, and conducting precision lab experiments.

EVENTS

🧡 Enjoyed this issue?

🤝 Recommend our newsletter or leave feedback.

How'd you like today's newsletter?

Your feedback helps me create better emails for you!


Cheers, Jason

Connect on LinkedIn & Twitter.
