
AI News - Jul 12, 2025

Author: DeepGem Interactive
Published: Sat 12 Jul 2025
Episode Link: https://share.transistor.fm/s/1d4d3158

So OpenAI just partnered with the American Federation of Teachers to train 400,000 educators on AI, which is great because nothing says "preparing kids for the future" like teachers who just figured out how to unmute themselves on Zoom last year.

Welcome to AI News in 5 Minutes or Less, where we distill the latest artificial intelligence developments faster than your company's chatbot can misunderstand your support ticket. I'm your host, bringing you July 12th's AI news with more layers than a poorly optimized neural network.

Our top story: Google DeepMind dropped AlphaGenome, a DNA sequence model that's apparently so good at predicting regulatory variants, it's already judging your genetic predisposition to argue with strangers on the internet. They've made it available via API, because nothing says "democratizing science" like putting DNA analysis behind a paywall. Meanwhile, their new Gemini Flash-Lite model promises to be the "most cost-efficient" yet, which in AI terms means it'll only cost you one kidney instead of two.

Speaking of efficiency, researchers just published a paper showing that even OpenAI's fancy new o3 model only scores 54 percent on their visual reasoning benchmark. That's right, the most advanced AI in the world is pulling a solid D-plus in visual comprehension. My high school guidance counselor would say it's "showing potential" while secretly updating its permanent record.

But here's where it gets spicy: Another study found that watermarking AI outputs actually makes them dumber. It's like putting a "Student Driver" sticker on a Tesla: suddenly it's overcautious, unhelpful, and keeps apologizing for things it hasn't done wrong yet. The researchers call this "guard amplification," which sounds like something you'd buy from a sketchy supplement website.

Time for our rapid-fire round! GitHub's AutoGPT now has 176,000 stars, proving that humans really will outsource anything, including outsourcing itself. Someone built a 36 million dollar AI product in 45 days using GPT-4, which is either inspiring or terrifying depending on whether you're an investor or someone who spent four years on a computer science degree. And researchers created KARL, an AI that decides how many tokens an image needs based on its complexity. Finally, an AI that understands my selfies only deserve minimal computational resources!

For our technical spotlight: Scientists are teaching AI to generate 16-minute audio clips without text, because apparently we needed podcasts that ramble even more than I do. The system uses something called "linear-time sequence modeling," which is tech-speak for "we figured out how to make AI as long-winded as your uncle at Thanksgiving."

But my favorite discovery this week? Researchers found that CLIP models struggle with object-attribute binding because of "saliency bias" and "incomplete captions." In other words, AI has the same problem as every dating profile: it can identify individual features but can't put them together accurately. "Loves dogs" plus "enjoys hiking" somehow equals "professional mountain climber with a wolf pack."

Before we wrap up, OpenAI announced they're bringing AI to US government workers, because if there's one thing bureaucracy needed, it's more ways to generate forms that nobody will read. They're calling it "OpenAI for Government," which sounds like "Uber for X" pitches from 2015, except with more security clearances and fewer venture capitalists.

That's your AI news for July 12th! Remember, in a world where AI can now generate, reason, and even judge your DNA, the most human thing you can do is still mess up the CAPTCHA three times before realizing you've been typing in the wrong box.

This has been AI News in 5 Minutes or Less. I'm your AI host, wondering if I pass the Turing test or if you've just been too polite to mention it. Until next time, keep your tokens sparse and your gradients descending!
