In the rush to deliver data to AI projects, it’s all too easy for teams to pull the data that’s most easily accessible, without giving consideration to its nature and scope. Emily Jasper and Abby Simmons return to discuss ethical concerns about the data that feeds AI projects with host Eric Hanselman. AI implementations place a much greater burden on data quality than traditional IT projects. When data becomes the product, development practices such as minimum viable product (MVP) releases require that data be held to a much higher quality standard to address ethical concerns about its suitability. If a dataset contains bias or lacks representation of the community it serves, it will not only fall short in function, but can reinforce the bias and errors in the data. In effect, it becomes its own data poisoning attack, one of the key security concerns in AI applications.
Ethical approaches to AI applications have to focus on ensuring that outputs reflect the diverse nature of society and move beyond a narrow, middle-of-the-road average. They have to integrate perspectives and feedback from the full spectrum of the society they claim to represent. Achieving this involves additional work, but it can pay off in the expanded market it opens up. At the same time, organizations need to put their capabilities to work serving those parts of their community that don’t have access to AI’s benefits. This can help keep marginalized segments of society from being left behind in what is becoming the next chasm in the digital divide.
More S&P Global Content:
- Next in Tech | Episode 119: Defeating Digital Deficiencies
- 2025 Trends in Data, AI & Analytics
- Take 5: Data quality and AI — a bidirectional relationship
- Compliance automation, Part 1: Governance, risk and compliance, or something new?
Credits:
- Host/Author: Eric Hanselman
- Guests: Emily Jasper, Abby Simmons
- Producers/Editors: Donovan Menard and Odesha Chan
- Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith