Beyond Scaling Laws: Understanding Transformer Performance with Associative Memory


May 15, 2024 · 13 mins



Increasing the size of a transformer model does not always improve performance. This paper presents a theoretical framework, based on associative memories and Hopfield networks, that explains memorization and performance dynamics in transformer-based language models.
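
For intuition on the associative-memory view mentioned above, here is a minimal Python sketch (an illustration, not the paper's exact construction) of a modern continuous Hopfield network, whose retrieval update takes the same form as softmax attention: a noisy query is iteratively pulled toward the nearest stored pattern.

```python
import numpy as np

# Minimal sketch, assuming a modern continuous Hopfield network:
# stored patterns act as an associative memory, and the retrieval
# update xi <- X softmax(beta * X^T xi) mirrors softmax attention.
rng = np.random.default_rng(0)

d, n = 64, 10                          # pattern dimension, number of memories
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)         # unit-norm stored patterns (columns)

def retrieve(xi, X, beta=8.0, steps=3):
    """Iterate the attention-like update; xi converges to a stored pattern."""
    for _ in range(steps):
        s = beta * (X.T @ xi)
        p = np.exp(s - s.max())        # numerically stable softmax
        xi = X @ (p / p.sum())
    return xi

# Query with a corrupted copy of pattern 0; retrieval should recover it.
noisy = X[:, 0] + 0.05 * rng.standard_normal(d)
out = retrieve(noisy, X)
cos = out @ X[:, 0] / (np.linalg.norm(out) * np.linalg.norm(X[:, 0]))
print(f"cosine similarity to stored pattern: {cos:.4f}")
```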


Paper: https://arxiv.org/abs/2405.08707


YouTube: https://www.youtube.com/@ArxivPapers


TikTok: https://www.tiktok.com/@arxiv_papers


Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


--- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/support