Ads Dawson, release lead and a founding member of the Open Web Application Security Project (OWASP) Top 10 for Large Language Model Applications project, has no shortage of opinions on securing generative artificial intelligence (GenAI) and LLMs. With rapid adoption across the tech industry, GenAI and LLMs are dominating the conversation in the infosec community. But Ads says the security approach is similar to that for other attack vectors, such as APIs: first, you need to understand the context of AI-related vulnerabilities and how an attacker might approach hacking a particular AI model.
In the latest episode of WE’RE IN!, Ads talks about building threat modeling into the design phase when integrating GenAI into applications, and how he uses AI in his own red teaming and application security work.
Listen to hear more about:
The misuse of AI, such as creating deepfakes for financial gain or manipulating powerful systems like the stock market
The role of governments in securing the AI space and the concept of “safe” AI
How the infosec community can contribute to OWASP frameworks