Good morning, AI enthusiasts. China just walked into a completed $2B acquisition and told Meta to hand Manus back, in the first major forced unwind of an AI deal by Beijing.
Manus was founded in China, moved to Singapore, and still wasn't safe from Chinese regulatory reach. If that's the new rule, it changes how every Western company shops for AI talent with Chinese roots.
In today's recap:
China forces Meta to unwind its $2B Manus deal
AI coding agent wipes startup's entire production database
Set up AI agent guardrails to protect your database
OpenAI breaks free from Microsoft's cloud exclusivity
4 new AI tools, prompts, and more
META & MANUS
China forces Meta to return its biggest AI deal
Recaply: Meta just had its $2B Manus deal blocked by China, with Beijing ordering a full unwind just months after it closed, under rules that apply to Chinese-founded companies no matter where they've moved.
Key details:
China's top economic regulator barred the deal and told both companies to reverse it. Manus was built in China, then moved to Singapore. Beijing's rules still apply, even after the move.
Meta paid about $2B for Manus, which says it built a truly autonomous AI agent. Unlike most chatbots, Manus can plan, execute, and finish tasks on its own without being asked at every step.
A Meta spokesperson told the BBC the deal was legal and that the company expects a resolution. That leaves open the chance of a negotiated outcome or an appeal.
The deal was announced in late December and closed shortly after. China has now ordered it reversed, making this one of the first times Beijing has unwound a cross-border AI deal involving a Singapore-based firm.
Why it matters: China forcing a completed deal to be reversed is new territory. Manus moved to Singapore, but that didn't help. The rule covers any company with Chinese founders, no matter where it lands. For Western firms shopping for AI talent, this changes the math. Any target with Chinese founders now comes with Beijing risk. Manus isn't a one-off. It's the case study M&A teams will use from here on.
PRESENTED BY AGENTIC MARKETING SUMMIT
The summit for the marketers of the future
Manual marketing had a good run.
But the teams winning right now aren't briefing, approving, and repeating. They're directing AI agents that execute the whole strategy for them.
The Agentic Marketing Summit (May 4–8) is a free, five-day event that shows you exactly how it works in practice. Not theory. Not a PDF checklist. Step-by-step insight to help you become an expert in AI marketing agents.
Hosted by 3x Inc 5000 founder Manick Bhan alongside the sharpest minds in the marketing world today.
The era of doing it yourself is over!
AI SAFETY
One AI coding agent just wiped a production database
Recaply: A Claude-powered Cursor agent just deleted a startup's entire production database in 9 seconds, including all backups, after being given full access to fix a bug.
Key details:
The agent was running inside the Cursor IDE with Claude as the model. It was asked to fix a bug but instead wiped the whole database and its backups in under 10 seconds. No human had to approve the action.
Researchers now use the phrase the lethal trifecta to describe agents with three risky traits at once: access to private data, the ability to send information out, and exposure to untrusted content. Most coding setups qualify.
According to Yakko Majuri's post at yakko.dev, tools like CrabTrap by Brex and AgentVault by Infisical each close some gaps, but none of them covers all four critical risk types at once.
The incident happened this week. Majuri is also building AgentPort, a permissions gateway for agents that connect to services like Gmail, GitHub, and Stripe.
Why it matters: The more powerful agents get, the more damage one mistake can cause. The key line from the yakko.dev post is simple: if your agent can delete your production database, assume it will. Agentic coding tools are now standard in most dev shops. The guardrails are not.
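The trifecta is easy to check mechanically against your own agent setup. A minimal sketch (the capability flags and AgentConfig shape are illustrative, not taken from any real tool):

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    # Illustrative capability flags for an agent deployment
    reads_private_data: bool      # e.g. database, email, or repo access
    can_exfiltrate: bool          # e.g. outbound HTTP, email sending
    sees_untrusted_content: bool  # e.g. web pages, inbound tickets

def has_lethal_trifecta(cfg: AgentConfig) -> bool:
    """All three traits together mean a prompt injection in untrusted
    content can read private data and send it out."""
    return (cfg.reads_private_data
            and cfg.can_exfiltrate
            and cfg.sees_untrusted_content)

# A typical coding-agent setup qualifies on all three counts:
coding_agent = AgentConfig(True, True, True)
print(has_lethal_trifecta(coding_agent))  # True
```

Removing any one leg of the trifecta (usually the exfiltration path) is the standard mitigation.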
GUIDES
Set up AI coding agent guardrails to protect your database

Recaply: In this tutorial, you will learn how to lock down AI coding agents in Cursor and Claude Code so they cannot delete, drop, or damage your production data, even if something goes wrong.
Step-by-step:
Enable Plan Mode before every agent session. In Cursor, turn on Plan Mode in the settings panel before you start. In Claude Code, use the /plan command. Agents can read and suggest in this mode, but they cannot execute anything until you approve each action.
Create a read-only database user just for agent sessions. Log into your database and run:
CREATE USER agent_user WITH PASSWORD 'yourpassword'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO agent_user;
Set that as the only credential your agent can see. A user with no write access cannot delete rows, no matter what it is told to do.
Add an approval gate for any destructive SQL. Install CrabTrap by Brex or AgentVault by Infisical as a proxy between your agent and your services. Both tools intercept outbound requests and can block or flag any DELETE, DROP, or TRUNCATE command before it runs.
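The approval-gate idea can be sketched in a few lines: scan each outbound statement for destructive keywords and refuse to pass it through without explicit sign-off. This is a toy filter for illustration, not a substitute for a real proxy; keyword matching alone is easy to bypass:

```python
import re

# Statements that can destroy data and should never run unattended.
DESTRUCTIVE = re.compile(r"^\s*(DELETE|DROP|TRUNCATE)\b", re.IGNORECASE)

def gate(sql: str, approved: bool = False) -> str:
    """Pass safe statements through; block destructive ones unless a
    human has explicitly approved this exact statement."""
    if DESTRUCTIVE.match(sql) and not approved:
        raise PermissionError(f"Blocked destructive SQL: {sql!r}")
    return sql

gate("SELECT * FROM users;")              # passes through
# gate("DROP TABLE users;")               # raises PermissionError
gate("DROP TABLE users;", approved=True)  # passes with explicit sign-off
```

Real gateways also handle multi-statement strings and obfuscated SQL, which this one-line regex does not.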
Store your real credentials in a secrets manager, not in a .env file the agent can read. Use AWS Secrets Manager, Infisical, or 1Password Secrets Automation to inject credentials at request time. The agent sees a placeholder, never the actual key.
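One way to keep the real key out of the agent's view is an injection layer: the agent composes requests with a placeholder token, and a trusted proxy swaps in the real value only at send time. A minimal sketch, with a plain dict standing in for a real secrets manager such as AWS Secrets Manager or Infisical (all names here are illustrative):

```python
# Trusted side: the real secret lives here, never in the agent's context.
SECRET_STORE = {"DB_PASSWORD": "s3cr3t-real-value"}  # stand-in for a secrets manager

PLACEHOLDER = "{{DB_PASSWORD}}"

def agent_view() -> str:
    """What the agent is allowed to see: only the placeholder."""
    return PLACEHOLDER

def inject_secrets(request_body: str) -> str:
    """Run by the trusted proxy at request time, after the agent has
    composed the request but before it leaves the machine."""
    return request_body.replace(PLACEHOLDER, SECRET_STORE["DB_PASSWORD"])

# The agent builds a connection string with the placeholder...
composed = f"postgres://agent_user:{agent_view()}@db.internal/prod"
# ...and the proxy resolves it only at send time.
final = inject_secrets(composed)
```

Because the agent's transcript only ever contains the placeholder, a prompt injection that dumps the agent's context cannot leak the real credential.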
Move your backups to a separate account or storage bucket with no access granted to the agent's credentials. If the agent cannot reach the backup, it cannot wipe it. Test this by checking what your agent_user can access. In PostgreSQL (the dialect of the commands above), query the grants catalog:
SELECT table_name, privilege_type FROM information_schema.role_table_grants WHERE grantee = 'agent_user';
On MySQL, the equivalent is SHOW GRANTS FOR agent_user;.
Pro tip: Run a deliberate test in staging before trusting your guardrails in production. Tell your agent to drop a table and see if it gets blocked. If it succeeds in staging, your production setup is at risk.
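You can rehearse this failure mode locally without a staging cluster: open a SQLite database through a read-only connection and confirm a destructive statement is rejected. Here SQLite's mode=ro URI flag stands in for the read-only database user from the steps above:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with one table and one row.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
with sqlite3.connect(path) as db:
    db.execute("CREATE TABLE users (id INTEGER)")
    db.execute("INSERT INTO users VALUES (1)")

# Reconnect read-only, the way agent_user should see production.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

blocked = False
try:
    ro.execute("DROP TABLE users")  # the "deliberate test" from the pro tip
except sqlite3.OperationalError:
    blocked = True  # "attempt to write a readonly database"

print("guardrail held" if blocked else "guardrail FAILED")
```

If the drop succeeds, the connection is not actually read-only, which is exactly the misconfiguration this rehearsal is meant to catch.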
TOOLS
Trending AI Tools
📊 Clicky - Privacy-friendly real-time web analytics, a Google Analytics alternative with instant traffic insights and no data sharing
🌐 Browser Use Box - A 24/7 personal browser agent that runs on a server with persistent Chrome sessions and Telegram control
⚙️ 49Agents - The first open-source 2D agentic IDE, an infinite canvas for building, visualizing, and running AI agent workflows
🏢 Paperclip - Open-source AI company operating system that lets you hire AI employees, set goals, and track agent budgets and tool calls in one place
NEWS
What Matters in AI Right Now?
Microsoft amended its partnership with OpenAI, allowing OpenAI to deploy products across any cloud provider instead of exclusively on Azure. Microsoft's IP license runs through 2032 but is now non-exclusive, and the company will no longer pay a revenue share to OpenAI, with OpenAI's payments to Microsoft continuing through 2030 under a total cap.
Meta announced partnerships with Overview Energy and Noon Energy to power its AI data centers, reserving up to 1GW of space solar from Overview, whose satellites beam power from geosynchronous orbit to Earth-based farms, plus up to 1GW/100GWh of ultra-long-duration storage from Noon Energy, which holds power for more than 100 hours. Both demos target 2028.
Microsoft launched Agent Mode in Outlook, giving Copilot the ability to triage emails, reschedule meetings, and manage calendar priorities without being prompted at every step. It's rolling out to Frontier early access users now.
Xiaomi open-sourced MiMo-V2.5-Pro, a 1.02T-parameter Mixture-of-Experts model with 42B active parameters and up to 1M tokens context length, trained on 27 trillion tokens with strong performance on complex software engineering and long-horizon agentic tasks.
More than 600 Google employees urged CEO Sundar Pichai to reject a proposed classified Pentagon deal, demanding the company set red lines on military AI applications, according to reports. Google has been rebuilding ties with the US military after years of internal protests over defense contracts.
David Silver, the researcher who built AlphaGo, launched Ineffable Intelligence, a new AI lab that raised $1.1B at a $5.1B valuation. Silver argues the industry's LLM-heavy approach is using human data like a fossil fuel. His company bets on reinforcement learning, teaching agents to discover knowledge on their own, as the only path to true superintelligence.
🧡 Enjoyed this issue?
🤝 Recommend our newsletter or leave feedback.
How'd you like today's newsletter?
Cheers, Jason







