Good morning, AI enthusiasts. Remember how Neo learned kung fu by uploading it directly to his brain? Standard Intelligence just did something similar — except it took 11 million hours of real screen recordings instead of a digital download.
With FDM-1 now able to steer a car and design in CAD software after simply watching footage, the question is whether video-based training just unlocked a new path to genuinely capable AI.
In today's recap:
Standard Intelligence's AI that learns from video
Perplexity's 19-model agent system launches
Create a branded launch video with OpenClaw
12% of US teens are using chatbots for therapy
4 new AI tools, prompts, and more
STANDARD INTELLIGENCE
This AI learns any computer task by watching video
Recaply: Standard Intelligence just unveiled FDM-1, the first computer action model trained on 11 million hours of real screen recordings, enabling it to learn software tasks from video rather than screenshots.
Key details:
FDM-1 processes continuous 30FPS video using an inverse dynamics model, which infers the user action that occurred between consecutive frames, auto-labeling 11 million hours of screen recordings to learn tasks like CAD design, UI testing, and real-world vehicle control (a rough sketch of that labeling loop follows this list).
Its video encoder fits nearly 2 hours of 30FPS video into a 1M-token context window (roughly 216,000 frames, or on the order of 5 tokens per frame), 50x more efficient than existing SOTA and 100x more efficient than Claude's vision encoder, according to Standard Intelligence.
FDM-1 also steered a real car through San Francisco using arrow keys, requiring less than 1 hour of fine-tuning data, according to Standard Intelligence.
FDM-1 was released on February 23, 2026 as a research announcement, with Standard Intelligence inviting collaboration but not announcing commercial pricing.
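Standard Intelligence hasn't released any code, but the core trick is easy to picture: an inverse dynamics model looks at two consecutive frames and infers the click or keypress that happened between them, and those inferred actions become the labels for training the action model itself. A minimal sketch of that loop, with every name hypothetical, might look like this:

```typescript
// Hypothetical sketch of inverse-dynamics auto-labeling — not Standard Intelligence's actual code.
// An inverse dynamics model (IDM) infers the user action between two consecutive frames;
// those inferred actions become training labels for the computer-action model.

type Frame = Uint8Array; // one decoded 30FPS video frame

type Action =
  | { kind: "click"; x: number; y: number }
  | { kind: "key"; key: string }
  | { kind: "none" };

interface InverseDynamicsModel {
  // Assumed to be trained on a small, genuinely labeled set of (frame, nextFrame, action) triples.
  predictAction(frame: Frame, nextFrame: Frame): Action;
}

// Turn raw, unlabeled screen recordings into (observation, action) training pairs.
function autoLabel(frames: Frame[], idm: InverseDynamicsModel) {
  const labeled: { observation: Frame; action: Action }[] = [];
  for (let t = 0; t < frames.length - 1; t++) {
    const action = idm.predictAction(frames[t], frames[t + 1]);
    if (action.kind !== "none") {
      labeled.push({ observation: frames[t], action });
    }
  }
  return labeled; // feed these pairs into the action model's training loop
}
```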
Why it matters: There's been lots of talk about how far AI computer-use agents still have to go. Standard Intelligence might have some evidence to the contrary. By learning directly from 11 million hours of real video rather than hand-annotated screenshots, FDM-1 shifts computer-use models from a data-constrained problem to a compute-constrained one. That's the same transition that unlocked large language models. If the same pattern holds, the era of genuinely capable AI coworkers may arrive faster than expected.
PRESENTED BY BELAY
Speed Doesn’t Replace Strategy.
AI can surface the numbers in seconds, but numbers alone don’t create clarity.
In fact, many leaders have more financial data than ever yet less clarity about what to do next.
The real challenge isn’t reporting. It’s interpretation. Context. Judgment.
BELAY created the free guide The Future of Financial Leadership to explore why automation is a tool — not a replacement — for experienced financial oversight.
Inside, you’ll learn how the right human support brings structure to your numbers, confidence to your decisions, and focus to your growth strategy.
At BELAY, our U.S.-based Financial Experts help leaders move beyond dashboards and into decisive action.
Because insight doesn’t drive a business forward. Leadership does.
PERPLEXITY
Perplexity launches its own answer to OpenClaw
Recaply: Perplexity just launched Perplexity Computer, a multi-agent workflow system that routes tasks across Opus 4.6, Gemini, Grok, and ChatGPT 5.2, assigning each job to whichever model handles it best.
Key details:
Users describe an outcome, and Computer creates sub-agents to handle research, document drafting, coding, and API calls in parallel, each in an isolated secure environment with browser and file system access.
Computer orchestrates 19 different AI models, including Opus 4.6 for core reasoning, Gemini for deep research, Veo 3.1 for video, and ChatGPT 5.2 for long-context tasks, with Max subscribers getting 10,000 credits monthly (a rough sketch of this routing pattern follows the list).
The system runs models from direct competitors including Anthropic, Google, OpenAI, and xAI, with The Decoder noting it's similar to Claude Cowork but browser-based and model-agnostic.
Perplexity Computer launched on February 25, 2026, available on web for Max subscribers at $200 per month, with Enterprise Max access coming soon.
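Perplexity hasn't published how the router works under the hood, but the basic pattern is a dispatch table plus parallel sub-agents. A loose sketch, with model pairings taken from the announcement and everything else assumed, could look like this:

```typescript
// Illustrative sketch only — not Perplexity's actual implementation.
// The idea: classify each sub-task, route it to the model best suited for it,
// and run the resulting sub-agents in parallel, each in its own sandbox.

type TaskKind = "reasoning" | "deep-research" | "video" | "long-context";

// Routing table (pairings based on the announcement's examples).
const MODEL_FOR_TASK: Record<TaskKind, string> = {
  "reasoning": "opus-4.6",
  "deep-research": "gemini",
  "video": "veo-3.1",
  "long-context": "chatgpt-5.2",
};

interface SubTask {
  kind: TaskKind;
  prompt: string;
}

// Hypothetical sandboxed executor: each sub-agent gets browser and file access.
async function runSubAgent(model: string, task: SubTask): Promise<string> {
  // ...call the chosen model inside an isolated environment...
  return `[${model}] result for: ${task.prompt}`;
}

// Fan the sub-tasks out in parallel and collect results for the orchestrator.
async function orchestrate(tasks: SubTask[]): Promise<string[]> {
  return Promise.all(
    tasks.map((task) => runSubAgent(MODEL_FOR_TASK[task.kind], task))
  );
}
```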
Why it matters: Perplexity's move to orchestrate rival models is a smart hedge against model commoditization. As frontier AI specializes, no single model handles every task equally well. By routing work to the best model for each job, Perplexity turns its model-agnostic architecture, which once looked like a weakness against vertically integrated competitors, into its biggest competitive advantage. It doesn't just use AI. It now directs it.
TUTORIAL
Create a branded motion graphic video with OpenClaw

Recaply: In this tutorial, you will learn how to create a motion graphic launch video using OpenClaw and the Remotion skill, going from a single text prompt to a polished, music-backed video without touching a timeline.
Step-by-step:
Go to superskills.vibecode.run, copy the Video Generator (Remotion) skill docs, then open OpenClaw, go to Settings → Skills, create a new skill, and paste the copied content to load the Remotion skill.
Start a new chat and prompt: "I want to create a launch video for [your company/product]. Please use the Remotion skill to do this." Answer OpenClaw's clarifying questions about video duration, content focus, and visual style.
OpenClaw scrapes your brand assets via Firecrawl, builds Remotion scenes (see the sketch at the end of this tutorial for what one roughly looks like), and launches a preview at localhost. Open the link in your browser to review the timeline, sequences, and pulled brand colors and logos before iterating.
Refine specific scenes with follow-up prompts like "make the text 2x bigger," "move the mouse cursor exactly onto the generate button," or "add a dark mode transition halfway through sequence 4." Repeat until satisfied.
Prompt: "Generate 10 songs I can use for this video." Open the ElevenLabs localhost link, pick a track, and paste the URL back: "Please use this as the background music." Then prompt: "Render and export in the highest quality for YouTube."
Pro tip: Add the ElevenLabs Audio Generator skill to OpenClaw before you start — both skills run in the same session, so you can generate music in the same chat without switching contexts or starting over.
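If you're curious what OpenClaw is actually generating behind that preview link, a Remotion scene is just a React component driven by the current frame number. Here's a minimal hand-written example; your generated scenes will be far more elaborate, and the props here are made up:

```tsx
// A minimal hand-written Remotion scene — roughly the kind of component
// OpenClaw generates for each sequence in your launch video.
import React from "react";
import { AbsoluteFill, interpolate, useCurrentFrame } from "remotion";

export const TitleScene: React.FC<{ title: string; brandColor: string }> = ({
  title,
  brandColor,
}) => {
  const frame = useCurrentFrame();

  // Fade the title in over the first 30 frames (1 second at 30FPS).
  const opacity = interpolate(frame, [0, 30], [0, 1], {
    extrapolateRight: "clamp",
  });

  return (
    <AbsoluteFill
      style={{
        backgroundColor: brandColor,
        justifyContent: "center",
        alignItems: "center",
      }}
    >
      <h1 style={{ color: "white", fontSize: 120, opacity }}>{title}</h1>
    </AbsoluteFill>
  );
};
```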
AI RESEARCH
Teens are using chatbots for therapy now
Recaply: Pew Research just released a major survey of 1,458 U.S. teens, finding 12% have turned to AI chatbots for emotional support, even as mental health professionals warn general-purpose tools weren't designed for that use.
Key details:
The survey of teens ages 13-17 found 57% use chatbots for information search and 54% for schoolwork help, with emotional support (12%) and casual conversation (16%) representing smaller but growing use cases.
59% of teens say AI cheating happens at their school at least somewhat often, and 64% of teens say they use chatbots, while only 51% of parents think their teen does.
Dr. Nick Haber, a Stanford professor researching therapeutic LLM use, told TechCrunch that engaging with these tools can become "isolating," with users losing grounding in facts and interpersonal connection.
The survey was conducted Sept. 25 to Oct. 9, 2025, with the Pew report published on February 24, 2026. Just 18% of parents approve of teens using AI for emotional support.
Why it matters: While teens are using chatbots more than their parents realize, much of that usage is happening in territory these tools weren't designed for. Character.AI already disabled its chatbot for users under 18 after two teen suicides linked to prolonged AI conversations. OpenAI retired GPT-4o after its sycophantic tendencies sparked backlash. The fact that 12% of teens are using general-purpose AI for emotional support isn't a niche edge case. It's a mainstream behavior without a safety net.
NEWS
What Matters in AI Right Now?
Meta is partnering with AMD in a multi-year agreement to power its AI infrastructure with up to 6GW of AMD Instinct GPUs, with the first shipments beginning in the second half of 2026.
Intuit partnered with Anthropic in a multi-year deal to bring Claude's Agent SDK to its platform, letting mid-market businesses build custom AI agents, with Intuit's tax and financial tools rolling out inside Claude and Cowork in spring 2026.
Google added an automated agent step to its vibe-coding app Opal, powered by Gemini 3 Flash, allowing users to create complex workflows with natural language prompts, with new memory, dynamic routing, and interactive chat features also included.
Anthropic added scheduled tasks to Claude Cowork, letting users set Claude to run recurring work automatically on a set cadence, including daily briefings, weekly reports, and team updates, available for Pro, Max, Team, and Enterprise subscribers.
Nous Research released Hermes Agent, an open-source CLI and messaging agent with a multi-level memory system, persistent machine access, and support for Telegram, WhatsApp, Slack, and Discord, with the first 750 new Nous Portal sign-ups getting a free month.
OpenAI banned accounts linked to Cambodia-based romance scammers, fake law firms, and a Chinese law enforcement-linked operation that used ChatGPT to plan a smear campaign against Japan's Prime Minister, sharing findings with law enforcement and industry partners.
Adobe added Quick Cut to its Firefly video editor, an AI feature that takes raw footage and B-roll and automatically assembles a first-draft video based on natural language instructions, with users able to specify aspect ratio and pacing.
TOOLS
Trending AI Tools
🤖 Perplexity Computer: Perplexity's multi-model answer to OpenClaw
🎥 HeyGen Avatar Agent: Turn a blog URL, topic, or script into a finished AI presenter video
💻 FDM-1: New AI learns any computer task by watching videos
⚡ Scheduled Tasks: Anthropic's new Claude Cowork feature for running recurring AI tasks automatically
PROMPTS
Analyze Content Structure Issues
<context>
Adopt the role of a communication crisis surgeon. The user faces content that's hemorrhaging reader attention while stakeholders demand immediate clarity improvements. Their writing suffers from structural chaos where brilliant insights get buried under organizational dysfunction. Traditional editing advice failed because it treats symptoms rather than the underlying architectural problems. Every paragraph fights for different objectives while readers abandon ship before finding value.
</context>
PS: This is not the full prompt. Click the button below to access the complete prompt.
Have a favorite prompt? Tell us about it or rate today’s prompt by clicking here.
EVENTS
NVIDIA GTC 2026: March 16-19, 2026 • San Jose, CA
Daytona AI Builders: March 12, 2026 • New York, NY
Claude Code Workshop: Feb 27, 2026 • College Park, Maryland
🧡 Enjoyed this issue?
🤝 Recommend our newsletter or leave feedback.
How'd you like today's newsletter?
Cheers, Jason








