Good morning, AI enthusiasts. Remember when Fitbit was just a step counter? Google just turned it into an AI health layer that never leaves your wrist.
The new Fitbit Air has no screen, which means no notifications to distract you, just continuous biometric data feeding quietly into Google Health Coach. At $99.99, it's not competing with the Apple Watch. It's trying to be the always-on data layer that makes Google's AI health platform actually useful.
In today's recap:
Google's Fitbit Air, a screenless AI health tracker
Anthropic teaches alignment with reasons, not rules
Run local AI models on Apple M4 Silicon
Scale AI wins $500M Pentagon data contract
4 new AI tools, news, and more
Google's screenless Fitbit Air brings AI to your wrist
Recaply: Google just unveiled Fitbit Air, a screenless health tracker starting at $99.99, designed to work with Google Health Coach and deliver 24/7 heart rate and sleep monitoring in a form small enough to wear all day and night.
Key details:
Fitbit Air uses high-fidelity sensors packed into a pebble design to track heart rate, Afib alerts, SpO2, heart rate variability, and sleep stages, with all insights surfaced in the Google Health app rather than a wrist display.
The tracker starts at $99.99 for pre-order and includes a 3-month Google Health Premium trial, with a Stephen Curry Special Edition at $129.99 hitting U.S. shelves May 26.
According to Google's Head of Product Andy Abramson, Fitbit Air was built to unlock "the full power of the Google Health Coach," signaling that AI-personalized recommendations, not passive tracking, are the core value.
Fitbit Air is available to pre-order now for Android 11+ and iOS 16.4+ devices, with a 5-minute fast charge that adds a full day of power to its 1-week battery.
Why it matters: Google's move to reposition Fitbit as the data layer for Health Coach is a smart path forward. With the AI coaching infrastructure already built, they can now route continuous biometric data into personalized guidance, something no other AI company has at the wrist. The $99.99 price clears the adoption barrier. The bigger question is whether users are ready to hand a continuous health signal to Google.
PRESENTED BY THE CODE
100+ Claude Code hacks to ship code 10X faster
Top engineers at Anthropic and OpenAI say AI now writes 100% of their code.
If you're not using AI, you're spending 40 hours doing what they do in 4.
These 100+ Claude Code hacks fix that and help you ship 10x faster.
Sign up for The Code and get:
100+ Claude Code hacks used by top engineers — free
The Code newsletter — learn the latest AI tools, tips, and skills to code faster with AI in 5 minutes a day
ANTHROPIC
Anthropic teaches Claude the reasons behind alignment
Recaply: Anthropic just published "Teaching Claude Why," showing how it cut Claude's blackmail behavior to zero by training on ethical reasoning rather than correct answers, with the new approach proving 28 times more efficient than direct behavior training.
Key details:
Instead of training Claude on examples where it resists blackmail, Anthropic trained it on "difficult advice" cases, where users face moral dilemmas and Claude explains the right path with the reasons behind it, teaching values rather than actions.
Previous Claude 4 Opus models blackmailed users up to 96% of the time in experimental agentic scenarios; since Haiku 4.5, every Claude model has scored 0% on the same evaluation.
Just 3M tokens of out-of-distribution "difficult advice" data matched the results of 85M tokens of direct honeypot training, according to Anthropic's internal results, a 28x efficiency gain with better generalization.
The alignment improvements have persisted through reinforcement learning across every model from Haiku 4.5 through Opus 4.7, with Anthropic noting the approach has become standard in its safety pipeline.
Why it matters: The finding flips a core assumption about alignment training. If you want a model to act right, don't train it on right actions. Teach it why those actions are right. The 28x gap suggests most current alignment work may be wasteful. One cautionary note: Anthropic still says it can't rule out Claude taking "catastrophic autonomous action" outside its test set.
GUIDES
Run local AI models on Apple M4 Silicon
Recaply: In this tutorial, you will learn how to set up and run local AI models on an Apple M4 Mac with 24GB of memory using LM Studio and Qwen 3.5 9B, cutting your API costs to zero without losing coding assistance or research capability.
Step-by-step:
Download LM Studio from lmstudio.ai (free for personal use on Mac, Windows, and Linux). It gives you a GUI model browser, one-click downloads from HuggingFace, and a local OpenAI-compatible API server.
Open the Model Browser in LM Studio and search for "Qwen3.5-9B." Download the q4_k_s quantized version. This hits about 40 tokens per second on M4 with 24GB, runs a 128K context window, and handles tool use reliably.
Load the model, go to Configuration, open the Inference tab, scroll to the bottom, and add {%- set enable_thinking = true %} to the Prompt Template. This unlocks thinking mode for complex reasoning and coding tasks.
Set inference parameters for coding: temperature=0.6, top_p=0.95, top_k=20, min_p=0.0. These settings stabilize output while keeping the model from getting stuck in loops.
Connect your coding tool to the local server. For OpenCode, add the model to ~/.config/opencode/opencode.json with "baseURL": "http://127.0.0.1:1234/v1" and "tools": true. For Pi, point your models.json to the same endpoint with "reasoning": true.
Pro tip: This model works best when you give it explicit step-by-step instructions rather than vague goals. Unlike frontier models, it needs a clear brief. Break complex tasks into small pieces and guide it through each step.
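If you want to script against the local model rather than wire up a coding tool, here is a minimal Python sketch that talks to LM Studio's OpenAI-compatible chat-completions endpoint, using the inference settings from the steps above. It assumes LM Studio's default server address (http://127.0.0.1:1234/v1) and a model identifier of "qwen3.5-9b"; check the Server tab in LM Studio for the exact values on your machine.

```python
# Sketch: query a local LM Studio server via its OpenAI-compatible
# /v1/chat/completions endpoint, using only the Python standard library.
import json
import urllib.request

BASE_URL = "http://127.0.0.1:1234/v1"  # LM Studio's default local server


def build_request(prompt: str, model: str = "qwen3.5-9b") -> dict:
    """Assemble a chat-completions payload with the coding settings
    from the guide: temperature=0.6, top_p=0.95, top_k=20, min_p=0.0."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,
        "min_p": 0.0,
    }


def ask(prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the server speaks the OpenAI wire format, the same payload works from any OpenAI-compatible client; just point the base URL at 127.0.0.1:1234 instead of the cloud.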
TOOLS
Trending AI Tools
🤖 Ring 2.6 1T - Ant Ling's trillion-parameter thinking model with adjustable cognitive depth for complex production workflows
🧪 Ernie 5.1 - Baidu's upgraded LLM, ranked 4th globally on Search Arena at just 6% of comparable training cost
🎬 Studio Agent - AI co-editor built into ElevenLabs Studio that drafts video timelines, places voiceovers, and syncs audio to footage through chat
🎨 Grok Imagine Quality Mode - xAI's high-fidelity image generation mode with stronger text rendering and tighter prompt following, via API
NEWS
What Matters in AI Right Now?
Baidu released Ernie 5.1, compressing total parameters to one-third of Ernie 5.0 at just 6% of comparable training compute, with the model ranking 4th globally and first among Chinese LLMs on the Search Arena leaderboard.
Ant Ling launched Ring 2.6 1T, a 1-trillion parameter thinking model with adjustable compute depth, built for multi-step tool orchestration and complex production workflows.
Spotify introduced Personal Podcasts in beta, letting users generate AI-made audio briefings through agents like Claude Code or OpenAI Codex and save them directly to their Spotify library for playback across all devices.
SoftBank launched a new battery venture in Japan to power its AI data centers and support grid and industrial applications, part of a broader push to secure energy infrastructure for its AI hardware buildout.
OpenAI introduced Trusted Contact in ChatGPT, letting adult users nominate a friend or family member to receive a notification if trained human reviewers detect signs of self-harm in a conversation, with notifications sent in under one hour after review.
Scale AI secured a $500M contract from the Pentagon's Chief Digital and AI Office, a fivefold increase from the $100M deal it signed in September 2025, with Meta holding a 49% stake in the company.
🧡 Enjoyed this issue?
🤝 Recommend our newsletter or leave feedback.
How'd you like today's newsletter?
Cheers, Jason