When hackers take on AI: Sci-fi – or the future?

Aug 30, 2022 · 40 mins

Because we lack a fundamental understanding of the internal mechanisms of current AI models, today’s guest has a few theories about what these models might do when they encounter situations outside of their training data, with potentially catastrophic results. Tuning in, you’ll hear from Connor Leahy, one of the founders of EleutherAI, a grassroots collective of researchers working to open-source AI research. He’s also founder and CEO of Conjecture, a startup doing fascinating research into the interpretability and safety of AI. In today’s episode, Leahy elaborates on some of the technical problems that he and other researchers are running into and the creativity that will be required to solve them. We also look at some of the nefarious ways he sees AI evolving in the future and how he believes computer security hackers could help mitigate these risks without curbing technological progress. We close on an optimistic note, with Leahy encouraging early-career researchers to focus on the ‘massive orchard’ of low-hanging fruit in interpretability and AI safety and sharing his vision for this extremely valuable field of research.

To learn more, make sure not to miss this fascinating conversation with EleutherAI founder Connor Leahy! Full transcript.

Key Points From This Episode:

  • The true story of how EleutherAI started as a hobby project during the pandemic.
  • Why Leahy believes that it’s critical that we understand AI technology.
  • The importance of making AI more accessible to those who can do valuable research.
  • What goes into building a large language model: data, engineering, and compute.
  • Leahy offers some insight into the truly monumental volume of data required to train these models and where it is sourced from.
  • A look at Leahy’s (very specific) perspective on making EleutherAI’s models public.
  • Potential consequences of releasing these models; will they be used for good or evil?
  • Some of the nefarious ways in which Leahy sees AI technology evolving in the future.
  • Mitigating the risks that AI poses; how we can prevent these systems from spinning out of control without curbing progress.
  • Focusing on solvable technical problems to build systems with embedded safeguards.
  • Why Leahy wishes more computer security hackers would work on AI problems.
  • Low-hanging fruit in interpretability and AI safety for early-career researchers.
  • Why Leahy is optimistic about understanding these problems better going forward.
  • The creativity required to come up with new ways of thinking about these problems.
  • In closing, Leahy encourages listeners to take a shot at linear algebra, interpretability, and understanding neural networks.

Credits

Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.

This podcast is sponsored by GitHub, DataStax and Google.

No sponsor had any right or opportunity to approve or disapprove the content of this podcast.