Lisa Seacat DeLuca

To prevent biased algorithms you need to make sure you have unbiased training data on hand. You also need algorithms to be developed by a diverse set of people.
–Lisa Seacat DeLuca, Director of Offering Management and Distinguished Engineer for IBM Watson Internet of Things


Amber Grewal

How do we select vendors, and what do we focus on? We must ask: What data did the AI learn from? How did you arrive at the algorithm you’re using? Are you validating it? What is the purpose of the data? How is the AI learning?
–Amber Grewal, Vice President of Global Talent Acquisition at IBM

Show Notes

Recently, Amazon announced it had shut down a talent-finding algorithm built by its internal team. Why? Because it was perpetuating bias against women at the tech giant, which is unacceptable in today’s work environment.

With so many bots, algorithms, and other tools being used to automate our work and personal lives, it’s important to think about how this affects each of us. Is there bias in the algorithms that drive our decisions? If so, how do we mitigate it?

In today’s episode, Ben talks with two IBM leaders with diverse perspectives on AI, bias, and more. Lisa Seacat DeLuca and Amber Grewal both join the show to talk about how they see AI benefiting the workplace but also how to watch for bias and prevent it from creeping into the finished product.

To learn more, be sure to check out the following resources from IBM:

Twitter: @IBMWatsonTalent 

What are your thoughts? Can algorithms be trained to avoid bias, or will they always be biased to some degree? What can other firms learn from IBM’s trailblazing approach?