Featuring: Dr. Merrick Osborne, Racial Equity Postdoctoral Scholar, Berkeley Haas
Machine learning models have become ubiquitous tools that are central to our everyday lives. Their importance cannot be overstated: they both inform our decision-making and hint at how those around us think. Yet these seemingly objective “machines” are more human than they appear. Their output is frequently shaped by consequential baked-in biases, fueling discrimination and inequality by disproportionately favoring some populations over others. In this talk, Dr. Osborne considers how organizations – and the people in them – can unintentionally introduce bias into these models. He then draws on insights from psychology to discuss how they can prevent this from occurring.
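As a rough illustration of what “disproportionately favoring some populations over others” can look like in practice, the sketch below computes per-group selection rates for a model’s decisions and the gap between them (a simple demographic-parity check). The data, group labels, and decision encoding are hypothetical; this is a minimal sketch of one common fairness diagnostic, not a method presented in the talk.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive (favorable) decisions per group.

    `decisions` is a list of (group, decision) pairs, where decision is
    1 if the model favored the person (e.g., approved an application)
    and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decisions: the model approves group A far more often than group B.
toy_decisions = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +   # 80% approval rate for group A
    [("B", 1)] * 50 + [("B", 0)] * 50     # 50% approval rate for group B
)

rates = selection_rates(toy_decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                              # {'A': 0.8, 'B': 0.5}
print(f"selection-rate gap: {gap:.2f}")   # 0.30 -> one group is favored far more often
```

A large gap like this does not by itself prove discrimination, but it is the kind of disparity that signals the baked-in biases the talk addresses and that warrants investigation of the data and decisions feeding the model.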