How to secure AI systems


Feb 09 2023 · 40 mins

With so many artificial systems claiming “intelligence” now available to the public, making sure they do what they’re designed to do is of the utmost importance. Dr. Bruce Draper, Program Manager in the Information Innovation Office at DARPA, joins us on this bonus episode of Deep Dive: AI to unpack his work in the field and his current role. We have a fascinating chat with Draper about the risks and opportunities in this exciting field, and why growing bigger and more involved Open Source communities is better for everyone. Draper introduces us to the Guaranteeing AI Robustness Against Deception (GARD) Project, its main short-term goals, and how these aim to mitigate exposure to danger while we explore the possibilities that machine learning offers. We also spend time discussing the agency’s Open Source philosophy and foundation, the AI boom of recent years, why policy making is so critical, the split between academic and corporate contributions, and much more. For Draper, community involvement is critical to spotting potential issues and threats. Tune in to hear it all from this exceptional guest! Read the full transcript.

Key points from this episode:

  • The objectives of the GARD project and DARPA’s broader mission.
  • How the Open Source model plays into the research strategy at DARPA.
  • Differences between machine learning and more traditional IT systems.
  • Draper shares his ideas for ideal communities and the role of stakeholders.
  • Key factors behind the ‘extended summer of AI’ we have been experiencing.
  • How to get involved in the GARD Project, and how the community makes the systems more secure.
  • The main impetus for the AI community to address these security concerns.
  • Draper explains the complications of safety-critical AI systems.
  • Deployment opportunities and concurrent development for optimum safety.
  • Thoughts on the scope and role of policy makers in the AI security field.
  • The need for a deeper theoretical understanding of possible and present threats.
  • Draper talks about the broader goal of a self-sustaining Open Source community.
  • Plotting the future role and involvement of DARPA in the community.
  • The partners that DARPA works with: academic and corporate.
  • The story of how Draper got involved with the GARD Project and adversarial AI.
  • Looking at the near future for Draper and DARPA.
  • Reflections on the last few years in AI and how much of this could have been predicted.

Credits

Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.

This podcast is sponsored by GitHub, DataStax and Google.

No sponsor had any right or opportunity to approve or disapprove the content of this podcast.