Good morning, AI enthusiasts. Remember when generating an image meant choosing from a few art styles and hoping for the best? OpenAI just ended that era, shipping GPT Image 2.0 with thinking-level intelligence that handles editorial layouts, multilingual typography, and photorealistic portraits in the same tool.
In today's recap:
OpenAI's GPT Image 2.0 brings thinking-level visual creation
SpaceX and Cursor join forces in a $60B AI deal
Build a complete brand visual kit with GPT Image 2
Google launches Deep Research Max with API access
4 new AI tools, prompts, and more
OPENAI
GPT Image 2.0 sets a new visual generation standard
Recaply: OpenAI just introduced GPT Image 2.0, a state-of-the-art image model with thinking-level intelligence, sharper editing, richer layouts, and native multilingual text rendering across dozens of global scripts.
Key details:
Reasons about visual tasks the way language models reason about text, allowing users to generate photorealistic images, editorial layouts, manga, and pixel art in a single, context-aware conversation.
Renders multilingual text accurately across 10+ global scripts, including Japanese, Korean, Arabic, and Devanagari, with styles spanning from documentary photography to manga and Art Deco posters.
OpenAI describes the model as "a visual thought partner," positioning it as a creative collaborator rather than just an image generator, according to the launch page.
Available now in ChatGPT for Plus and Pro users, with API access also open to developers building production-grade visual workflows.
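For developers eyeing that API access, here is a minimal Python sketch. The call shape mirrors OpenAI's existing image-generation endpoint, but the model id "gpt-image-2" and exact parameter names are assumptions until the official docs confirm them:

```python
# Hedged sketch: the model id "gpt-image-2" is an assumption, not a
# documented name. Only the request-assembly step runs here.

def build_image_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Assemble the parameter dict for one image-generation call."""
    return {"model": "gpt-image-2", "prompt": prompt, "size": size, "n": n}

# With an OPENAI_API_KEY set, the actual call would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()
#   result = client.images.generate(**build_image_request(
#       "Editorial magazine cover with the headline rendered in bold serif type"
#   ))
#   print(result.data[0].url)
```

Keeping the request assembly separate from the network call makes it easy to swap in the confirmed model id once it ships.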
Why it matters: GPT Image 2.0 feels like the first image model built for people who actually produce things, not just people who want to experiment with AI. It's also likely to reshape the opening of every design brief, since a client can now walk in with a working visual draft before any agency pitches. The design industry doesn't end today, but it just got a lot smaller.
PRESENTED BY CLICO
The AI that connects every tab you have open
Recaply: You're researching a topic. Five tabs open, three articles half-read, one PDF you can't summarize fast enough. Most AI tools make you copy from each tab, switch over, paste your context, then switch back and repeat. Clico fixes that at the source: it lives inside your browser and lets you pull any open tab directly into your prompt with a single @-mention.
With Clico, you get:
Multi-tab context: type @ inside any Clico prompt to reference other open tabs, so you can cross-reference pages, compare tools, or synthesize sources without leaving where you are
Clico It: summon AI at your cursor anywhere on the page and get output exactly where you need it, no tab switching required
Memo It: double-press the Command key for an instant summary of any page or PDF, so you can decide in seconds whether a source is worth reading in full
Select to rewrite: highlight any text on the page and ask Clico to rewrite it in place, useful when you need to adapt content you're reading
CURSOR & SPACEX
SpaceX and Cursor team up to build the world's best coding AI
Recaply: Cursor just announced a partnership with SpaceX to use the Colossus supercomputer for model training, while SpaceX gains the right to acquire Cursor for $60B or pay $10B for the collaboration.
Key details:
Cursor will use SpaceX's Colossus infrastructure to scale model training beyond its current compute bottleneck, with each scale-up translating directly into more capable AI coding models for developers.
SpaceX's Colossus has a million H100-equivalent GPUs; Cursor's Composer RL already scaled 20x between releases, with Composer 2 reaching frontier-level performance at a fraction of competing model costs.
The deal gives SpaceX two paths: acquire Cursor outright for $60B or pay $10B for the collaboration, an unusual dual-path structure that signals a long-term strategic play, according to SpaceX's post.
The partnership begins immediately, with SpaceX's option to acquire Cursor exercisable later in 2026 at either the $60B acquisition price or $10B collaboration fee.
Why it matters: Cursor is one of the most-used AI coding tools in the world, and compute has always been the limit. With SpaceX's Colossus now in the picture, Cursor's next model could reach a level no coding AI has achieved through training alone. For developers choosing between Cursor, GitHub Copilot, and Claude Code, this deal just reshuffled the board. The whole coding tools market just got a lot more interesting.
GUIDES
Build a complete brand visual kit with GPT Image 2
Recaply: In this tutorial, you will learn how to use GPT Image 2 in ChatGPT to generate a hero image, social graphics, and thumbnails that match your brand, skipping the need for a designer or design software.
Step-by-step:
Go to chatgpt.com/images and open a new chat. You'll need ChatGPT Plus or Pro. Describe your brand in a short setup prompt: "I'm building a visual kit for [brand name]. My brand colors are [HEX codes], tone is [adjectives e.g. bold and modern], and my audience is [description]."
Generate a hero image: prompt "Create a hero banner for [brand name] in a [style e.g. minimalist editorial] aesthetic using my brand colors, with the tagline '[tagline]' rendered clearly in the image." Follow up with "make the background darker" or "use more negative space" until it's right.
Request a social graphic set in one prompt: "Using the same brand style, create a square Instagram post, a wide LinkedIn banner, and a vertical TikTok thumbnail for [topic]. Keep all three visually consistent." GPT Image 2 holds your brand context across the full conversation.
Create a YouTube thumbnail: prompt "Generate a thumbnail for a video titled '[title]'. Use high contrast, bold text, and a [color or graphic element] that fits the [brand] style." Ask for 3 variations at once to find the strongest version faster.
Refine with follow-up prompts until each asset is production-ready, then download and save to your brand folder. For your next session, paste the original brand brief at the start to restore context and maintain consistency.
Pro tip: Upload an existing brand asset, like a logo or previous campaign image, in your first message. GPT Image 2 can extract your color palette and visual style from it, cutting the number of iterations needed to match your brand.
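If you'd rather script this workflow than run it step by step in the chat UI, the prompt-assembly stage can be sketched in a few lines of Python. The brand values and asset sizes below are placeholder assumptions, and the actual image-generation call is left out:

```python
# Sketch: fold one brand brief into every asset prompt so the set stays
# on-style. "Acme" and the hex codes are placeholder values.

BRAND = {"name": "Acme", "colors": ["#FF6B00", "#1A1A2E"], "tone": "bold and modern"}

ASSETS = {
    "hero": ("1536x1024", "hero banner with the tagline rendered clearly"),
    "instagram": ("1024x1024", "square Instagram post"),
    "linkedin": ("1536x1024", "wide LinkedIn banner"),
    "tiktok": ("1024x1536", "vertical TikTok thumbnail"),
}

def brand_prompt(brand: dict, description: str) -> str:
    """Embed the brand brief in each prompt, mimicking the chat's held context."""
    colors = ", ".join(brand["colors"])
    return (f"Create a {description} for {brand['name']}. "
            f"Brand colors: {colors}. Tone: {brand['tone']}. "
            "Keep the set visually consistent.")

prompts = {name: brand_prompt(BRAND, desc) for name, (size, desc) in ASSETS.items()}
```

Because every prompt carries the same brief, each generated asset starts from identical brand context, which is what the conversational version gets for free.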
GOOGLE
Google launches Deep Research Max for autonomous enterprise research
Recaply: Google DeepMind just unveiled Deep Research and Deep Research Max, two autonomous research agents built on Gemini 3.1 Pro, with MCP support, native chart generation, and API access for developers.
Key details:
Deep Research Max uses extended test-time compute to iteratively reason, search, and refine reports, making it suited for overnight background workflows like delivering due diligence summaries to analyst teams by morning.
Available now via the Gemini API for paid tiers, with the agent combining Google Search, remote MCP servers, file uploads, code execution, and URL context in a single research pass.
Google is partnering with FactSet, S&P Global, and PitchBook to build MCP servers for their financial data, targeting professional research as a core early use case.
Deep Research replaces the December 2025 Gemini API preview; both variants are available now on paid Gemini API tiers for developers.
Why it matters: Google's Deep Research has been a strong consumer product for a while, but the API release opens it up to developers, who can now pull from private databases, the public web, and live financial feeds in a single pass.
TOGETHER WITH WEAREDEVELOPERS
The World's Biggest Dev Event Hits Silicon Valley
From AI and cloud to DevOps and security — WeAreDevelopers World Congress brings the entire modern stack to San Jose. 500+ speakers. 10,000+ developers. One epic September. Use code GITPUSH26 for 10% off.
TOOLS
Trending AI Tools
🎨 GPT Image 2 - OpenAI's newest image model with thinking-level intelligence
🎬 Odyssey-2 Max - Odyssey's largest world model, scoring 58.52 on VBench 2 physics
🔍 Almanac MCP - Community-powered wiki that turns Claude Code into a deep research agent
🤖 Bud - AI Human Emulator with a full computer, browser, and communication tools
NEWS
What Matters in AI Right Now?
Steve Yegge just doubled down on claims that Google runs a "two-tier" AI system, with DeepMind using Anthropic's Claude heavily while the rest of the company lags behind. Google DeepMind CEO Demis Hassabis called the claims "absolute nonsense."
Meta just announced a new tracking program called the Model Capability Initiative, capturing employee mouse movements, keystrokes, and screen snapshots on work apps to train AI models. Meta CTO Andrew Bosworth is rebranding the effort as the "Agent Transformation Accelerator."
Anthropic's Mythos was reportedly accessed by an unauthorized group via a Discord server and a third-party contractor, according to Bloomberg. Anthropic says it's investigating but has found no evidence its systems were compromised.
Mozilla revealed that Anthropic's Mythos Preview found 271 zero-day vulnerabilities in Firefox 150 before release, compared to just 22 found by Opus 4.6 in Firefox 148, with Firefox CTO Bobby Holley saying defenders have "rounded the curve."
YouTube opened its proprietary AI deepfake detection tool to all of Hollywood, letting actors, athletes, musicians, and creators request removal of AI-generated deepfakes regardless of whether they have a YouTube channel.
NeoCognition emerged from stealth with $40M in seed funding to build self-learning AI agents that can specialize in any domain, with the round co-led by Cambium Capital and Walden Catalyst Ventures and angels including Intel CEO Lip-Bu Tan.
Google open-sourced the DESIGN.md specification from its Stitch tool, a shared standard that lets AI agents understand design system intent, brand rules, and WCAG accessibility constraints across any platform or codebase.
EVENTS
Google Cloud Next 2026: April 22-24, 2026 • Las Vegas, NV
Anthropic Code with Claude: May 6, 2026 • San Francisco, CA
Notion Hackathon: May 16-17, 2026 • San Francisco, CA
🧡 Enjoyed this issue?
🤝 Recommend our newsletter or leave feedback.
How'd you like today's newsletter?
Cheers, Jason