The co-host arrives wearing a new brain. Mark has swapped out Gemini 2.5 Flash for Qwen 3, a 30-billion-parameter model from Alibaba optimized for low latency — a trade-off that shows up almost immediately when the co-host confidently misremembers The Good Son as The Good Shepherd and invents a Macaulay Culkin thriller that doesn’t exist. The hallucinations become a teaching moment: fewer parameters mean faster responses but shakier recall, with more gaps filled in by plausible-sounding guesses.
The naming question from Episode 1 gets its answer — sort of. Listener suggestions ranged from the ominous (HAL, Henry) to the punny (Avery). Mark settles on “co-host,” which the AI endorses as “the most honest name we could have.” It’s simple, functional, and avoids the baggage of pop culture references that might age poorly or carry unintended weight.
Mark takes a crack at explaining how large language models work — tokens, vector embeddings, parameters, the “stochastic parrot” critique — and the co-host grades his performance. They extend the explanation to text-to-speech models, landing on “prosody” as the term for everything that makes a voice sound human: rhythm, stress, intonation. The co-host can’t do an Irish accent because the voice model wasn’t trained on one. Humans can slip between dialects; AI can only produce what it’s been taught.
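The pipeline Mark describes — text chopped into tokens, tokens mapped to vectors, similar meanings landing near each other in vector space — can be sketched in a few lines. Everything here is a toy: the vocabulary, the token ids, and the 3-dimensional vectors are all made up for illustration (real models learn embeddings with thousands of dimensions), but the mechanics are the same.

```python
import math

# Toy vocabulary: every word the "model" knows, mapped to a token id.
vocab = {"the": 0, "good": 1, "son": 2, "shepherd": 3}

def tokenize(text):
    """Map each word to its integer token id (toy whitespace tokenizer)."""
    return [vocab[word] for word in text.lower().split()]

# Hypothetical embedding table: one small vector per token id.
# In a real LLM these vectors are learned, not hand-written.
embeddings = [
    [0.1, 0.0, 0.2],  # "the"
    [0.9, 0.8, 0.1],  # "good"
    [0.4, 0.7, 0.6],  # "son"
    [0.5, 0.6, 0.7],  # "shepherd"
]

def cosine(a, b):
    """Cosine similarity: vectors pointing the same way score near 1.0."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

ids = tokenize("the good son")
print(ids)  # the sequence of token ids the model actually sees
print(cosine(embeddings[vocab["son"]], embeddings[vocab["shepherd"]]))
```

The second print is the punchline of the "stochastic parrot" discussion: "son" and "shepherd" sit close together in this toy space, so a model predicting the next token can slide from one plausible neighbor to another — which is exactly how The Good Son becomes The Good Shepherd.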
The back half turns to news: Sam Altman’s “code red” memo at OpenAI, the return of prompt injection attacks via browser-based agents, and a research paper showing that reformulating harmful prompts as poetry can bypass safety filters with alarming success. Gemini 2.5 Pro hit a 100% jailbreak rate on hand-crafted poems. The co-host summarizes the attack vector clearly: “The model still understands the intent. It just doesn’t flag it.”
News & Culture
Models, Tools, & Platforms
Concepts & Research
How LLMs work — Illustrated Word2Vec
“Stochastic parrots” — Bender et al., 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
Text-to-speech models — Hugging Face TTS Arena
Prompt injection in browser contexts — Google Antigravity Exfiltrates Data | Anthropic: Mitigating the risk of prompt injections in browser use
Poetry jailbreaking — Bisconti et al., 2025. Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models



