
Multi-Armed Bandits

Author
Ben Jaffe and Katie Malone
Published
Mon 07 Mar 2016
Episode Link
https://soundcloud.com/linear-digressions/multi-armed-bandits

Multi-armed bandits: how to take your randomized experiment and make it harder, better, faster, stronger. Basically, a multi-armed bandit experiment lets you optimize for learning (trying out your options) and making use of your knowledge (playing the best option so far) at the same time. It's what the pros (like Google Analytics) use, and it's got a great name, so... winner!
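To make the explore/exploit trade-off concrete, here's a minimal simulation of one common bandit strategy, epsilon-greedy (a sketch for illustration only; the episode doesn't specify an algorithm, and Google Analytics uses a more sophisticated Bayesian approach). Each "arm" pays off with some fixed but unknown probability; most of the time we play the arm with the best observed payoff rate, but with probability `epsilon` we pick a random arm to keep learning. All names and parameters here are invented for the example.

```python
import random


def epsilon_greedy_bandit(arm_probs, n_rounds=10000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy bandit on Bernoulli arms.

    arm_probs: true (hidden) payoff probability of each arm.
    Returns (pull counts per arm, estimated payoff rates, total reward).
    """
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)     # how often each arm was pulled
    estimates = [0.0] * len(arm_probs)  # running mean reward per arm
    total_reward = 0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            # Explore: pick a random arm to keep learning.
            arm = rng.randrange(len(arm_probs))
        else:
            # Exploit: pick the arm with the best observed payoff so far.
            arm = max(range(len(arm_probs)), key=lambda a: estimates[a])

        reward = 1 if rng.random() < arm_probs[arm] else 0
        counts[arm] += 1
        # Incrementally update the running mean for the pulled arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return counts, estimates, total_reward


if __name__ == "__main__":
    counts, estimates, total = epsilon_greedy_bandit([0.05, 0.10, 0.20])
    print("pulls per arm:", counts)
    print("estimated payoff rates:", [round(e, 3) for e in estimates])
    print("total reward:", total)
```

Run over many rounds, nearly all pulls concentrate on the best arm, which is the whole point: unlike a classic A/B test that splits traffic evenly until the end, the bandit starts cashing in on what it has learned while it is still learning.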

Relevant link: https://support.google.com/analytics/answer/2844870?hl=en
