
Attention in Neural Nets

Author: [email protected] (Ben Jaffe and Katie Malone)
Published: Mon 17 Jun 2019
Episode Link: https://soundcloud.com/linear-digressions/attention-in-neural-nets

There’s been a lot of interest lately in the attention mechanism in neural nets. It has a colloquial name (who isn’t familiar with the idea of “attention”?), but it’s really a technical trick that’s been pivotal to some recent advances in computer vision and especially word embeddings. It’s an interesting example of trying out human-cognitive-ish ideas (like focusing consideration more on some inputs than others) in neural nets, and one of the more high-profile recent successes in playing around with neural net architectures for fun and profit.
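To make the "focusing more on some inputs than others" idea concrete, here is a minimal sketch of scaled dot-product attention in NumPy. This is not code from the episode, just an illustration under common assumptions: a single query vector is compared against a set of keys, the similarity scores are softmaxed into weights, and the output is a weighted average of the values.

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by how well its
    key matches the query, so the output 'attends' to some inputs
    more than others."""
    # Similarity of the query to each key, scaled by sqrt(dimension).
    scores = keys @ query / np.sqrt(query.shape[-1])
    # Softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Output is a weighted average of the values.
    return weights @ values

# Toy example: the query is most similar to the second key,
# so the output is pulled toward the second value.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
values = np.array([[10.0], [20.0], [30.0]])
query = np.array([0.0, 5.0])
print(attention(query, keys, values))
```

Because the weights come from a softmax, no input is ignored entirely; the mechanism softly redistributes emphasis rather than making a hard selection.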
