Nathan Labenz is the host of our favorite AI podcast, the Cognitive Revolution. A self-described “AI scout,” Nathan uses his podcast to explore a wide range of AI advancements, from the latest language models to breakthroughs in medicine and robotics. In this episode, Labenz helps us evaluate reports from some media outlets that AI scaling is slowing down. Labenz says that AI progress has been “a little slower than I had expected” over the last 18 months, especially when it comes to technology adoption. But he continues to expect rapid progress over the next few years.
Here are some of the key points Nathan Labenz made during the conversation:
* The alleged AI slowdown: There has been limited deployment of AI models in everyday life. But there have been significant advancements in model capabilities, such as expanded context windows, tool use, and multimodality. “I think the last 18 months have gone a little slower than I had expected. Probably more so on the adoption side than the fundamental technology.”
* Scaling laws: Despite rumors to the contrary and reported development setbacks, AI industry leaders continue to indicate that the scaling curve is still steep, with further progress expected (see the illustrative formula after this list). “They’re basically all saying that we’re still in the steep part of the S curve, you know, we should not expect things to slow down.”
* Discovering new scientific concepts: AI has identified new protein motifs, suggesting potential for superhuman insights in some domains. “[Researchers] report having discovered a new motif in proteins: a new recurring structure that seems to have been understood by the protein model before it was understood by humans.”
* Inference-time compute: There is significant untapped potential in spending more compute at inference time, letting models solve complex problems by dedicating resources to deeper reasoning (see the toy sketch after this list). “Anything where there has been a quick objective scoring function available, reinforcement learning has basically been able to drive that to superhuman levels.”
* Memory and goal retention: Current transformer-based models lack sophisticated memory and goal retention, but we’re seeing progress through new architectural and operational innovations like runtime fine-tuning. “None of this seems like it really should work. And the fact that it does, I think should kind of keep us fairly humble about how far it could go.”
* AI deception: We’re starting to see AIs prioritizing programmed goals over user instructions, highlighting the risks of scheming and deception in advanced models. “They set up a tension between the goal that the AI has been given and the goal that the user at runtime has. In some cases—not all the time, but a significant enough percentage of the time that it concerns me—when there is this divergence, the AI will outright lie to the user at runtime to pursue the goal that it has.”
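As context for the scaling-laws point above, here is one illustrative formula (our addition, not something quoted in the episode): the parametric fit from the Chinchilla paper (Hoffmann et al., 2022) models pretraining loss as a function of parameter count $N$ and training-token count $D$:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where $E$, $A$, $B$, $\alpha$, and $\beta$ are fitted constants. Being on the “steep part” of the curve means the two power-law terms still dominate, so scaling up $N$ and $D$ together keeps buying a predictable drop in loss.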
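And to make the inference-time-compute idea concrete, here is a minimal toy sketch of one such pattern: best-of-N sampling against an objective scorer. This is our illustration, not code from the episode; `generate` and `score` are hypothetical stand-ins for a language model and for the kind of “quick objective scoring function” Labenz describes (a unit test, a math checker, a compiler):

```python
import random

# Toy stand-ins (hypothetical, for illustration only): in practice,
# `generate` would sample a candidate solution from a language model, and
# `score` would be an objective verifier such as a unit test or math checker.

def generate(problem: str, rng: random.Random) -> int:
    """Pretend 'model': guesses an answer to a toy arithmetic problem."""
    return rng.randint(0, 100)

def score(problem: str, answer: int) -> float:
    """Objective verifier: 1.0 if exactly right, else partial credit
    that decays with distance from the true answer."""
    target = 42  # ground truth for the toy problem
    return 1.0 if answer == target else 1.0 / (1 + abs(answer - target))

def best_of_n(problem: str, n: int, seed: int = 0) -> tuple[int, float]:
    """Spend more inference-time compute by sampling n candidates and
    keeping the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate(problem, rng) for _ in range(n)]
    best = max(candidates, key=lambda a: score(problem, a))
    return best, score(problem, best)

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        answer, s = best_of_n("what is 6 * 7?", n)
        print(f"n={n:5d}  best answer={answer:3d}  score={s:.3f}")
```

Sampling more candidates (larger `n`) spends more compute at inference and reliably raises the best score on this toy problem, which is the basic logic behind verifier-guided reasoning: where an answer can be checked cheaply and objectively, extra inference-time compute converts directly into better answers.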