Building creative restrictions to curb AI abuse


Sep 06 2022 · 37 mins

Along with all the positive, revolutionary aspects of AI comes a more sinister side. Joining us today to discuss ethics in AI from the developer’s point of view is David Gray Widder. David is currently doing his Ph.D. at the School of Computer Science at Carnegie Mellon University, where he investigates AI from an ethical perspective, homing in specifically on the ethics-related challenges faced by AI software engineers. He has also conducted research at Intel Labs, Microsoft, and NASA’s Jet Propulsion Lab. In this episode, we discuss the harmful uses of deep fakes and their ethical ramifications in proprietary versus open source contexts. Widder breaks down the notions of technological inevitability and technological neutrality and explains the importance of challenging both ideas. He has also identified a continuum between implementation-based harms and use-based harms, and he fills us in on how each plays out in the open source development space.


Tune in to find out more about the importance of curbing AI abuse and the creativity required to do so, as well as the strengths and weaknesses of open source when it comes to AI ethics.


Key points from this episode:



  • Introducing David Gray Widder, a Ph.D. student researching AI ethics.

  • Why he chose to focus his research on AI ethics, and what drives that research.

  • Widder explains deep fakes and gives examples of their uses.

  • Sinister uses of deep fakes and the danger thereof.

  • The ethical ramifications of deep fake tech in proprietary versus open source contexts.

  • The kinds of harms that can be prevented in open source versus proprietary contexts.

  • The licensing issues that result in developers relinquishing control (and responsibility) over the uses of their tech.

  • Why Widder is critical of the notions of both technological inevitability and neutrality.

  • Why it’s important to challenge the idea of technological neutrality.

  • The potential to build restrictions, even within the dictates of open source.

  • The continuum between implementation-based harms and use-based harms.

  • How open source allows for increased scrutiny of implementation-based harms but decreased accountability for use-based harms.

  • The insight Widder gleaned from observing NASA’s use of AI, and how it relates to the deep fake case.

  • Widder voices his legal concerns around Copilot.

  • The difference between laws and norms.

  • How we’ve been unwittingly providing data by uploading photos online.

  • Why it’s important to include open source and public sector organizations in the ethical AI conversation.

  • The strengths and weaknesses of open source in terms of the ethical use of AI.



Credits


Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix.


This podcast is sponsored by GitHub, DataStax, and Google.


No sponsor had any right or opportunity to approve or disapprove the content of this podcast.