A deep dive into OpenAI's GPT-4.5 (codenamed Orion): the two-year build, its Azure-backed infrastructure, and the shift from compute bottlenecks to data bottlenecks. We dissect the full training pipeline, from unsupervised pretraining through supervised fine-tuning to RLHF, plus planning, co-design, real-world hiccups (like the PyTorch summation bug), system-card insights, and the multilingual and safety implications that shape its enterprise value and pricing.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC