#30 - Dr. Rumman Chowdhury - How to Fix AI Before It's Too Late?


Feb 01 2025 · 66 mins

We're joined by the US Science Envoy for AI, Dr. Rumman Chowdhury, a leading expert in responsible AI. We uncover the ethical, technical, and societal implications of artificial intelligence.


As AI rapidly reshapes the world, the question is: what happens when it doesn't align with human values? How do we navigate the risks of bias, misinformation, and hallucination in AI systems?


Dr. Chowdhury has been at the forefront of AI governance, red teaming, and AI risk mitigation. She has worked with global institutions, governments, and tech companies to make AI more accountable, safe, and equitable.


From her time leading Twitter's (now X) Machine Learning Ethics, Transparency and Accountability team to founding Humane Intelligence, she has actively shaped policies that determine how AI interacts with human society.


We dive deep into:


- AI bias, disinformation, and manipulation: How AI models inherit human biases and what we can do about it.


- Hallucinations in AI: Why generative AI models fabricate information and why it’s not a bug but a feature.


- AI governance and regulation: Why unchecked AI development is dangerous, and the urgent need for independent audits.


- The risks of OpenAI, Meta, and big tech dominance: Who is really in control of AI, and how can we ensure fair oversight?


- How companies should approach AI ethics: Practical strategies businesses can use to prevent harm while innovating responsibly.


Key Takeaways from the Episode:


1. AI as a Tool, Not a Mind:

Dr. Rumman Chowdhury debunks the myth that AI is alive or sentient. AI is a tool—just like a hammer—it can be used to build or destroy. The real issue isn’t AI itself, but how humans choose to use it.


2. Why AI Hallucinations Are Unavoidable:

Unlike traditional machine learning models, generative AI doesn’t compute facts; it predicts what words statistically fit together. This means hallucinations—where AI completely fabricates information—are not a flaw, but an inherent feature of how these models work.
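This point can be made concrete with a toy model (an illustration of the general idea, not something from the episode): a bigram "language model" that always picks the statistically most frequent next word. All names and data below are made up for the sketch. Because it optimizes for statistical fit rather than truth, it can stitch together fluent but false statements.

```python
from collections import defaultdict

# Tiny training corpus: two true sentences, tokenized as words.
corpus = (
    "the capital of france is paris . "
    "the author of hamlet is shakespeare ."
).split()

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no fact lookup."""
    followers = counts[word]
    return max(followers, key=followers.get)

def generate(prompt, n=4):
    """Greedily extend the prompt one most-likely word at a time."""
    words = prompt.split()
    for _ in range(n):
        words.append(predict_next(words[-1]))
    return " ".join(words)

# "of" is followed equally often by "france" and "hamlet", so the model
# happily completes a prompt it has never seen with a confident falsehood:
print(generate("the author of", n=3))  # "the author of france is paris"
```

Every step here is a locally plausible word transition, yet the sentence as a whole is fabricated, which is the structural reason hallucinations cannot be patched away: the model was never computing facts in the first place.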


3. The Hidden Biases in AI Models:

AI models are only as good as their training data, which often reflects human biases. Dr. Chowdhury discusses how AI systems unintentionally amplify biases in hiring, finance, and law enforcement, and what needs to be done to fix it.


4. The Illusion of AI Objectivity:

Many assume AI models are neutral, but the truth is that all models are built with human input, which means they carry subjective biases. Dr. Chowdhury warns that the real danger is allowing a handful of tech elites to dictate how AI shapes global narratives.


5. The Need for AI Red Teaming & Auditing:

Just like cybersecurity stress tests, AI models need independent stress tests to identify risks before they cause harm. Dr. Chowdhury shares her experience leading global AI red teaming exercises with scientists and governments to assess AI’s real-world impact.


6. OpenAI and the Power Problem:

Is OpenAI truly aligned with public interest? Dr. Chowdhury critiques how AI giants hold more power than entire nations and explains why AI must be treated as a public utility rather than a corporate monopoly.


7. Why AI Needs More Public Oversight:

Most AI governance is self-imposed by the companies that build these models. Dr. Chowdhury calls for third-party, independent AI audits, similar to financial auditing, to ensure transparency and accountability in AI decision-making.


8. The Role of Governments vs. Private AI Firms:

With AI development largely controlled by private companies, what role should governments play? Dr. Chowdhury argues that governments must create AI Safety Institutes, set up national regulations, and empower independent researchers to hold AI accountable.


Timestamps:


(00:00) - Introduction to Dr. Rumman Chowdhury and AI ethics


(03:03) - Why AI is just a tool (and how it’s being misused)


(04:58) - The difference between machine learning, deep learning, and generative AI


(07:43) - Why AI hallucinations will never fully go away


(11:46) - AI misinformation and the challenge of verifying truth


(13:26) - The ethical risks of OpenAI and Meta’s control over AI


(18:20) - The role of red teaming in stress-testing AI models


(30:26) - Should AI be treated as a public utility?


(35:43) - Government vs. private AI oversight—who should regulate AI?


(37:22) - The case for third-party AI audits


(53:51) - The future of AI governance and accountability


(61:03) - Closing thoughts and how AI can be a force for good


Join us in this deep dive into the world of AI ethics, accountability, and governance with one of the field’s top leaders.



Follow our host (@iwaheedo) for more insights on technology, civilization, and the future of AI.